From: Al Viro
Date: Mon, 5 Mar 2012 06:40:29 +0000 (+0000)
Subject: flush_tlb_range() needs ->page_table_lock when ->mmap_sem is not held

flush_tlb_range() needs ->page_table_lock when ->mmap_sem is not held

All other callers already hold either ->mmap_sem (exclusive) or
->page_table_lock.  And we need it because some flush_tlb_range()
instances do work explicitly with page tables.  See e.g.
arch/powerpc/mm/tlb_hash32.c, flush_tlb_range() and flush_range() in
there.  The same goes for uml, with a lot more extensive playing with
page tables.

Almost all callers are actually fine - flush_tlb_range() may have no
need to bother playing with page tables, but it can do so safely; this
caller is the sole exception - everything else either has exclusive
->mmap_sem on the mm in question, or holds mm->page_table_lock.

Signed-off-by: Al Viro
Signed-off-by: Linus Torvalds
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5f34bd8dda34..a876871f6be5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2277,8 +2277,8 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
 	}
-	spin_unlock(&mm->page_table_lock);
 	flush_tlb_range(vma, start, end);
+	spin_unlock(&mm->page_table_lock);
 	mmu_notifier_invalidate_range_end(mm, start, end);
 	list_for_each_entry_safe(page, tmp, &page_list, lru) {
 		page_remove_rmap(page);
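
[Editor's illustration, not part of the commit: a minimal sketch of the
locking rule being enforced.  The function unmap_range_sketch and its
elided body are hypothetical; flush_tlb_range(), spin_lock(), and the
mm/vma types are the real kernel interfaces named above.  The point is
that when ->mmap_sem is not held exclusively, mm->page_table_lock must
stay held across the flush, because flush_tlb_range() may itself walk
page tables on some architectures.]

#include <linux/mm.h>
#include <asm/tlbflush.h>

/* Hypothetical caller holding only mm->page_table_lock. */
static void unmap_range_sketch(struct vm_area_struct *vma,
			       unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;

	spin_lock(&mm->page_table_lock);

	/* ... clear page table entries for [start, end) ... */

	/*
	 * flush_tlb_range() may walk page tables itself (e.g. on
	 * 32-bit powerpc hash or uml), so it must run before the
	 * lock protecting them is dropped.
	 */
	flush_tlb_range(vma, start, end);
	spin_unlock(&mm->page_table_lock);	/* not before the flush */
}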