From: Laura Abbott
Date: Wed, 21 Sep 2016 22:25:04 +0000 (-0700)
Subject: BACKPORT: arm64: Correctly bounds check virt_addr_valid
X-Git-Tag: firefly_0821_release~176^2~124
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=7678802f665436bd92822f99f5e6cb5faaf577b1;p=firefly-linux-kernel-4.4.55.git

BACKPORT: arm64: Correctly bounds check virt_addr_valid

virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address is given and passes the result to pfn_valid to verify. vmalloc
and module addresses can generate a pfn that happens to be valid. Fix
this by only performing the pfn_valid check on addresses that have the
potential to be valid.

Acked-by: Mark Rutland
Signed-off-by: Laura Abbott
Signed-off-by: Will Deacon
Bug: 31374226
Change-Id: I75cbeb3edb059f19af992b7f5d0baa283f95991b
(cherry picked from commit ca219452c6b8a6cd1369b6a78b1cf069d0386865)
Signed-off-by: Sami Tolvanen
---

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 12f8a00fb3f1..ba1b3409d7ed 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -193,7 +193,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET		((unsigned long)PHYS_PFN_OFFSET)
 
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+
+#define _virt_addr_is_linear(kaddr)	(((u64)(kaddr)) >= PAGE_OFFSET)
+#define virt_addr_valid(kaddr)	(_virt_addr_is_linear(kaddr) && \
+				 _virt_addr_valid(kaddr))
 
 #endif
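
For context, a minimal, hypothetical test module (not part of this patch) sketches the behaviour the fix targets: before the change, handing a vmalloc or module address to virt_addr_valid() could spuriously return true whenever the pfn computed by __pa() happened to exist, while with the linear-map check above it now returns false for such addresses. The module name, messages, and expected results below are assumptions for illustration only.

/* Hypothetical sketch, not part of the patch: exercise virt_addr_valid()
 * on a linear-map address and on a vmalloc address.
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static int __init vav_test_init(void)
{
	void *lin = kmalloc(16, GFP_KERNEL);	/* linear-map address */
	void *vml = vmalloc(PAGE_SIZE);		/* vmalloc address */

	if (lin)
		pr_info("kmalloc addr valid: %d\n", virt_addr_valid(lin)); /* expect 1 */
	if (vml)
		pr_info("vmalloc addr valid: %d\n", virt_addr_valid(vml)); /* 0 with this fix; could be 1 before */

	kfree(lin);
	vfree(vml);
	return 0;
}

static void __exit vav_test_exit(void)
{
}

module_init(vav_test_init);
module_exit(vav_test_exit);
MODULE_LICENSE("GPL");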