Avoid double memclear() in SLOB/SLUB
author	Linus Torvalds <torvalds@woody.linux-foundation.org>
	Sun, 9 Dec 2007 18:14:36 +0000 (10:14 -0800)
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>
	Sun, 9 Dec 2007 18:17:52 +0000 (10:17 -0800)
Both slob and slub react to __GFP_ZERO by clearing the allocation, which
means that passing the __GFP_ZERO bit down to the page allocator is just
wasteful and pointless.

Acked-by: Matt Mackall <mpm@selenic.com>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/slob.c
mm/slub.c

index ee2ef8af0d43194536bb9e6744166c33002c8e9d..773a7aa80ab5ce53883cd7ad4d07a6019a175fe0 100644 (file)
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -330,7 +330,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 
        /* Not enough space: must allocate a new page */
        if (!b) {
-               b = slob_new_page(gfp, 0, node);
+               b = slob_new_page(gfp & ~__GFP_ZERO, 0, node);
                if (!b)
                        return 0;
                sp = (struct slob_page *)virt_to_page(b);
index b9f37cb0f2e6a61d80eeca2b3c9dc3799e863bce..9c1d9f3b364f63d7a6be5cdc5f98f2b18f8f797a 100644 (file)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1468,6 +1468,9 @@ static void *__slab_alloc(struct kmem_cache *s,
        void **object;
        struct page *new;
 
+       /* We handle __GFP_ZERO in the caller */
+       gfpflags &= ~__GFP_ZERO;
+
        if (!c->page)
                goto new_slab;
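
The idea behind both hunks can be sketched in plain C. This is a toy model, not kernel code: `DEMO_GFP_ZERO`, `demo_page_alloc()`, and `demo_slab_alloc()` are made-up names standing in for `__GFP_ZERO`, the page allocator, and slob/slub respectively. The slab layer zeroes the object itself, so it masks the zero flag before calling down, exactly as `gfp & ~__GFP_ZERO` does in the diff; the clear counter shows the memory is cleared once instead of twice.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical flag value for illustration; not the kernel's gfp.h */
#define DEMO_GFP_ZERO 0x01u

static int clear_count; /* how many times the allocation was zeroed */

/* Stand-in for the page allocator: it honors the zero flag itself */
static void *demo_page_alloc(unsigned int flags, size_t size)
{
	void *p = malloc(size);

	if (p && (flags & DEMO_GFP_ZERO)) {
		memset(p, 0, size);
		clear_count++;
	}
	return p;
}

/*
 * Stand-in for slob/slub: it also clears on DEMO_GFP_ZERO, so it strips
 * the bit before handing the flags down -- otherwise the same memory
 * would be memset twice, once per layer.
 */
static void *demo_slab_alloc(unsigned int flags, size_t size)
{
	void *p = demo_page_alloc(flags & ~DEMO_GFP_ZERO, size);

	if (p && (flags & DEMO_GFP_ZERO)) {
		memset(p, 0, size);
		clear_count++;
	}
	return p;
}
```

Dropping the `flags & ~DEMO_GFP_ZERO` mask would leave the result equally zeroed but bump `clear_count` to 2, which is the redundant work the commit removes.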