[PATCH] Auto size the per cpu area.
author Eric W. Biederman <ebiederm@xmission.com>
Tue, 26 Sep 2006 08:52:35 +0000 (10:52 +0200)
committer Andi Kleen <andi@basil.nowhere.org>
Tue, 26 Sep 2006 08:52:35 +0000 (10:52 +0200)
Now for a completely different but trivial approach.
I just boot-tested it with 255 CPUs and everything worked.

Currently everything we place in the per cpu area
(except module data) is known at compile time.  So
instead of allocating a fixed size for the per_cpu
area, allocate the number of bytes we actually need
plus a fixed constant reserved for modules.

It isn't perfect, but it is much less of a pain
to work with than what we are doing now.

AK: fixed warning

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
arch/x86_64/kernel/setup64.c
include/asm-x86_64/percpu.h

diff --git a/arch/x86_64/kernel/setup64.c b/arch/x86_64/kernel/setup64.c
index b09e60fa96b4f4d0ab9dd7e47e8e9769d318e744..e85cfbb49b6382aff5cdd56a6a31d21032a07869 100644
--- a/arch/x86_64/kernel/setup64.c
+++ b/arch/x86_64/kernel/setup64.c
@@ -95,12 +95,9 @@ void __init setup_per_cpu_areas(void)
 #endif
 
        /* Copy section for each CPU (we discard the original) */
-       size = ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES);
-#ifdef CONFIG_MODULES
-       if (size < PERCPU_ENOUGH_ROOM)
-               size = PERCPU_ENOUGH_ROOM;
-#endif
+       size = PERCPU_ENOUGH_ROOM;
 
+       printk(KERN_INFO "PERCPU: Allocating %lu bytes of per cpu data\n", size);
        for_each_cpu_mask (i, cpu_possible_map) {
                char *ptr;
 
diff --git a/include/asm-x86_64/percpu.h b/include/asm-x86_64/percpu.h
index 08dd9f9dda81854dc1b222dacd24ef209e6be29f..39d2bab9b5205f4aa0d97e77c868b38f73109482 100644
--- a/include/asm-x86_64/percpu.h
+++ b/include/asm-x86_64/percpu.h
 
 #include <asm/pda.h>
 
+#ifdef CONFIG_MODULES
+# define PERCPU_MODULE_RESERVE 8192
+#else
+# define PERCPU_MODULE_RESERVE 0
+#endif
+
+#define PERCPU_ENOUGH_ROOM \
+       (ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
+        PERCPU_MODULE_RESERVE)
+
 #define __per_cpu_offset(cpu) (cpu_pda(cpu)->data_offset)
 #define __my_cpu_offset() read_pda(data_offset)