x86, mm: Align start address to correct big page size
authorYinghai Lu <yinghai@kernel.org>
Sat, 17 Nov 2012 03:38:54 +0000 (19:38 -0800)
committerH. Peter Anvin <hpa@linux.intel.com>
Sat, 17 Nov 2012 19:59:15 +0000 (11:59 -0800)
We are going to use a buffer in the BRK area to map a small range just
under the memory top, and then use that newly mapped RAM to map the RAM
range below it.

The RAM range that gets mapped first may be only page aligned, but the
ranges around it are RAM too, so we can map it with a bigger page size
and avoid falling back to small pages.

page_size_mask will be adjusted in the following patch:
	x86, mm: Use big page size for small memory range
so that a big page size is used for such small RAM ranges.

In preparation for that, this patch makes sure the start address is
aligned down according to the bigger page size; otherwise the page
table entries will not hold the correct page frame.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1353123563-3103-18-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
arch/x86/mm/init_32.c
arch/x86/mm/init_64.c

index 11a58001b4cef4c652721858c8e7f24ebbf22c6e..27f7fc69cf8a978af8ada9b4e9f173789e7c5ccb 100644 (file)
@@ -310,6 +310,7 @@ repeat:
                                        __pgprot(PTE_IDENT_ATTR |
                                                 _PAGE_PSE);
 
+                               pfn &= PMD_MASK >> PAGE_SHIFT;
                                addr2 = (pfn + PTRS_PER_PTE-1) * PAGE_SIZE +
                                        PAGE_OFFSET + PAGE_SIZE-1;
 
index 32c7e3847cf619ec28f9732df1879535997b7153..869372a5d3cf0a3ab026483a2923a08fb093802c 100644 (file)
@@ -464,7 +464,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long address, unsigned long end,
                        pages++;
                        spin_lock(&init_mm.page_table_lock);
                        set_pte((pte_t *)pmd,
-                               pfn_pte(address >> PAGE_SHIFT,
+                               pfn_pte((address & PMD_MASK) >> PAGE_SHIFT,
                                        __pgprot(pgprot_val(prot) | _PAGE_PSE)));
                        spin_unlock(&init_mm.page_table_lock);
                        last_map_addr = next;
@@ -541,7 +541,8 @@ phys_pud_init(pud_t *pud_page, unsigned long addr, unsigned long end,
                        pages++;
                        spin_lock(&init_mm.page_table_lock);
                        set_pte((pte_t *)pud,
-                               pfn_pte(addr >> PAGE_SHIFT, PAGE_KERNEL_LARGE));
+                               pfn_pte((addr & PUD_MASK) >> PAGE_SHIFT,
+                                       PAGE_KERNEL_LARGE));
                        spin_unlock(&init_mm.page_table_lock);
                        last_map_addr = next;
                        continue;