fix hugetlb page allocation leak
author Ken Chen <kenchen@google.com>
Tue, 24 Jul 2007 01:44:00 +0000 (18:44 -0700)
committer Linus Torvalds <torvalds@woody.linux-foundation.org>
Tue, 24 Jul 2007 19:24:59 +0000 (12:24 -0700)
dequeue_huge_page() has a serious memory leak upon hugetlb page
allocation.  The for loop continues dequeuing hugetlb pages from all
allowable zones, even though this function is supposed to dequeue one
and only one page.

Fix it by breaking out of the for loop once a hugetlb page is found.

Signed-off-by: Ken Chen <kenchen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/hugetlb.c

index f127940ec24fc8c52e6492de4e5729ce330c487b..d7ca59d66c5929194da1b9c0197d924aba2106cd 100644
@@ -84,6 +84,7 @@ static struct page *dequeue_huge_page(struct vm_area_struct *vma,
                        list_del(&page->lru);
                        free_huge_pages--;
                        free_huge_pages_node[nid]--;
+                       break;
                }
        }
        return page;
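
For illustration, here is a minimal, self-contained userspace model of
the bug and the fix.  It is a sketch, not the kernel code: identifiers
like free_huge_pages_node mirror the kernel's, but the zonelist walk,
list handling, and policy checks are simplified.

/* Minimal userspace model of the leak and the fix; names mirror the
 * kernel's counters but the data structures are simplified. */
#include <stdio.h>

#define NR_NODES 3

static int free_huge_pages_node[NR_NODES] = { 2, 2, 2 };
static int free_huge_pages = 6;

/* Dequeue one and only one page: stop scanning as soon as a node with
 * a free page is found.  Without the early return (the break in the
 * patch), the loop would keep decrementing the counters for every node
 * that has a free page, "allocating" pages whose references are then
 * dropped on the floor. */
static int dequeue_one_page(void)
{
	int nid;

	for (nid = 0; nid < NR_NODES; nid++) {
		if (free_huge_pages_node[nid] > 0) {
			free_huge_pages_node[nid]--;
			free_huge_pages--;
			return nid;	/* equivalent to the added break */
		}
	}
	return -1;	/* no hugetlb page available */
}

int main(void)
{
	int nid = dequeue_one_page();

	printf("dequeued one page from node %d, %d pages left\n",
	       nid, free_huge_pages);
	return 0;
}

With the break in place, the caller receives the single page that was
unlinked; before the fix, every later zone's page was unlinked from its
free list but never returned or freed.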