From: Woodhouse, David
Date: Wed, 19 Dec 2012 13:25:35 +0000 (+0000)
Subject: intel-iommu: Free old page tables before creating superpage
X-Git-Tag: firefly_0821_release~7541^2~105
X-Git-Url: http://demsky.eecs.uci.edu/git/?a=commitdiff_plain;h=5b8692bf7ca5beed9bb75cf97b47dc1d3e5691e2;p=firefly-linux-kernel-4.4.55.git

intel-iommu: Free old page tables before creating superpage

commit 6491d4d02893d9787ba67279595990217177b351 upstream.

The dma_pte_free_pagetable() function will only free a page table page
if it is asked to free the *entire* 2MiB range that it covers. So if a
page table page was used for one or more small mappings, it's likely to
end up still present in the page tables... but with no valid PTEs.

This was fine when we'd only be repopulating it with 4KiB PTEs anyway,
but the same virtual address range can end up being reused for a
*large-page* mapping. And in that case we were trying to insert the
large page into the second-level page table and getting a complaint
from the sanity check in __domain_mapping(), because there was already
a corresponding entry. This was *relatively* harmless; it led to a
memory leak of the old page table page, but no other ill effects.

Fix it by calling dma_pte_clear_range() (hopefully redundant) and
dma_pte_free_pagetable() before setting up the new large page.

Signed-off-by: David Woodhouse
Tested-by: Ravi Murty
Tested-by: Sudeep Dutt
Signed-off-by: Linus Torvalds
Signed-off-by: CAI Qian
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/drivers/pci/intel-iommu.c b/drivers/pci/intel-iommu.c
index 0b4dbcdb0cc0..3f9a8912a15a 100644
--- a/drivers/pci/intel-iommu.c
+++ b/drivers/pci/intel-iommu.c
@@ -1793,10 +1793,17 @@ static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
 			if (!pte)
 				return -ENOMEM;
 			/* It is large page*/
-			if (largepage_lvl > 1)
+			if (largepage_lvl > 1) {
 				pteval |= DMA_PTE_LARGE_PAGE;
-			else
+				/* Ensure that old small page tables are removed to make room
+				   for superpage, if they exist. */
+				dma_pte_clear_range(domain, iov_pfn,
+						    iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
+				dma_pte_free_pagetable(domain, iov_pfn,
+						       iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1);
+			} else {
 				pteval &= ~(uint64_t)DMA_PTE_LARGE_PAGE;
+			}
 		}
 		/* We don't need lock here, nobody else
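
As a rough illustration of the range being cleared and freed above (a
minimal userspace sketch, not kernel code: the 9-bit LEVEL_STRIDE, the
standalone lvl_to_nr_pages() stand-in and the iov_pfn value are
assumptions made for the example), largepage_lvl == 2 corresponds to
512 contiguous 4KiB pfns, i.e. the full 2MiB region that the new
superpage PTE will cover:

#include <stdio.h>

#define LEVEL_STRIDE 9	/* assumed: 9 bits of index per page-table level */

/* Illustrative stand-in for the driver helper of the same name. */
static unsigned long lvl_to_nr_pages(unsigned int lvl)
{
	return 1UL << ((lvl - 1) * LEVEL_STRIDE);
}

int main(void)
{
	unsigned long iov_pfn = 0x12200;	/* hypothetical 2MiB-aligned IOVA pfn */
	unsigned int largepage_lvl = 2;		/* level 2 => 2MiB superpage */
	unsigned long last_pfn = iov_pfn + lvl_to_nr_pages(largepage_lvl) - 1;

	/* These endpoints mirror the arguments passed to dma_pte_clear_range()
	 * and dma_pte_free_pagetable() in the hunk above. */
	printf("clear/free pfn range: 0x%lx - 0x%lx (%lu pages)\n",
	       iov_pfn, last_pfn, lvl_to_nr_pages(largepage_lvl));
	return 0;
}

For largepage_lvl == 2 this prints a 512-page range, which is why the
old 4KiB-granular page table page covering that span must be torn down
before the single large-page PTE is written.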