config SELECT_MEMORY_MODEL
	def_bool y
	depends on ARCH_SELECT_MEMORY_MODEL

choice
	prompt "Memory model"
	depends on SELECT_MEMORY_MODEL
	default DISCONTIGMEM_MANUAL if ARCH_DISCONTIGMEM_DEFAULT
	default SPARSEMEM_MANUAL if ARCH_SPARSEMEM_DEFAULT
	default FLATMEM_MANUAL

config FLATMEM_MANUAL
	bool "Flat Memory"
	depends on !(ARCH_DISCONTIGMEM_ENABLE || ARCH_SPARSEMEM_ENABLE) || ARCH_FLATMEM_ENABLE
	help
	  This option allows you to change some of the ways that
	  Linux manages its memory internally. Most users will
	  only have one option here: FLATMEM. This is normal
	  and a correct option.

	  Some users of more advanced features like NUMA and
	  memory hotplug may have different options here.
	  DISCONTIGMEM is a more mature, better tested system,
	  but is incompatible with memory hotplug and may suffer
	  decreased performance compared with SPARSEMEM. If unsure
	  between "Sparse Memory" and "Discontiguous Memory", choose
	  "Discontiguous Memory".

	  If unsure, choose this option (Flat Memory) over any other.

config DISCONTIGMEM_MANUAL
	bool "Discontiguous Memory"
	depends on ARCH_DISCONTIGMEM_ENABLE
	help
	  This option provides enhanced support for discontiguous
	  memory systems, over FLATMEM. These systems have holes
	  in their physical address spaces, and this option provides
	  more efficient handling of these holes. However, the vast
	  majority of hardware has quite flat address spaces, and
	  can suffer degraded performance from the extra overhead that
	  this option imposes.

	  Many NUMA configurations will have this as the only option.

	  If unsure, choose "Flat Memory" over this option.

config SPARSEMEM_MANUAL
	bool "Sparse Memory"
	depends on ARCH_SPARSEMEM_ENABLE
	help
	  This will be the only option for some systems, including
	  memory hotplug systems. This is normal.

	  For many other systems, this will be an alternative to
	  "Discontiguous Memory". This option provides some potential
	  performance benefits, along with decreased code complexity,
	  but it is newer, and more experimental.

	  If unsure, choose "Discontiguous Memory" or "Flat Memory"
	  over this option.

endchoice

config DISCONTIGMEM
	def_bool y
	depends on (!SELECT_MEMORY_MODEL && ARCH_DISCONTIGMEM_ENABLE) || DISCONTIGMEM_MANUAL

config SPARSEMEM
	def_bool y
	depends on (!SELECT_MEMORY_MODEL && ARCH_SPARSEMEM_ENABLE) || SPARSEMEM_MANUAL

config FLATMEM
	def_bool y
	depends on (!DISCONTIGMEM && !SPARSEMEM) || FLATMEM_MANUAL

config FLAT_NODE_MEM_MAP
	def_bool y
	depends on !SPARSEMEM

#
# Both the NUMA code and DISCONTIGMEM use arrays of pg_data_t's
# to represent different areas of memory. This variable allows
# those dependencies to exist individually.
#
config NEED_MULTIPLE_NODES
	def_bool y
	depends on DISCONTIGMEM || NUMA
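
#
# A sketch of what this buys: with NEED_MULTIPLE_NODES, node data is
# reached through a per-node array instead of a single static pg_data_t.
# Paraphrased and simplified from the kernel headers; the NUMA side lives
# in per-architecture headers (shown here as on x86):
#
#	#ifdef CONFIG_NEED_MULTIPLE_NODES
#	extern struct pglist_data *node_data[];	/* one pg_data_t per node */
#	#define NODE_DATA(nid)	(node_data[(nid)])
#	#else
#	extern struct pglist_data contig_page_data;	/* the single node */
#	#define NODE_DATA(nid)	(&contig_page_data)
#	#endif
#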

config HAVE_MEMORY_PRESENT
	def_bool y
	depends on ARCH_HAVE_MEMORY_PRESENT || SPARSEMEM

#
# SPARSEMEM_EXTREME (which is the default) does some bootmem
# allocations when memory_present() is called. If this cannot
# be done on your architecture, select this option. However,
# statically allocating the mem_section[] array can potentially
# consume vast quantities of .bss, so be careful.
#
# This option will also potentially produce smaller runtime code
# with gcc 3.4 and later.
#
config SPARSEMEM_STATIC
	bool

#
# Architectures which require a two-level mem_section in SPARSEMEM
# must select this option. This is usually for platforms with
# an extremely sparse physical address space.
#
config SPARSEMEM_EXTREME
	def_bool y
	depends on SPARSEMEM && !SPARSEMEM_STATIC
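
#
# The difference, paraphrased from the mem_section declarations in
# <linux/mmzone.h>:
#
#	#ifdef CONFIG_SPARSEMEM_EXTREME
#	/* two-level: small static root array, leaves bootmem-allocated */
#	extern struct mem_section *mem_section[NR_SECTION_ROOTS];
#	#else
#	/* one-level: the whole table lives in .bss */
#	extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
#	#endif
#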

config SPARSEMEM_VMEMMAP_ENABLE
	bool

config SPARSEMEM_ALLOC_MEM_MAP_TOGETHER
	def_bool y
	depends on SPARSEMEM && X86_64

config SPARSEMEM_VMEMMAP
	bool "Sparse Memory virtual memmap"
	depends on SPARSEMEM && SPARSEMEM_VMEMMAP_ENABLE
	default y
	help
	  SPARSEMEM_VMEMMAP uses a virtually mapped memmap to optimise
	  pfn_to_page and page_to_pfn operations. This is the most
	  efficient option when sufficient kernel resources are available.
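
#
# Why this is fast: with a virtually contiguous memmap, the pfn/page
# conversions collapse to pointer arithmetic. Paraphrased from
# <asm-generic/memory_model.h>:
#
#	/* memmap is virtually contiguous */
#	#define __pfn_to_page(pfn)	(vmemmap + (pfn))
#	#define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)
#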

config HAVE_MEMBLOCK_NODE_MAP
	bool

config ARCH_DISCARD_MEMBLOCK
	bool

config NO_BOOTMEM
	bool

config MEMORY_ISOLATION
	bool

config MOVABLE_NODE
	bool "Allow a node to have only movable memory"
	depends on HAVE_MEMBLOCK
	depends on NO_BOOTMEM
	help
	  Allow a node to have only movable memory. Pages used by the kernel,
	  such as direct-mapping pages, cannot be migrated, so the
	  corresponding memory device cannot be hotplugged. This option
	  allows the following two things:
	  - When the system is booting, a node full of hotpluggable memory
	    can be arranged to have only movable memory so that the whole
	    node can be hot-removed. (This needs the movable_node boot
	    option specified.)
	  - After the system is up, the option allows users to online all
	    the memory of a node as movable memory so that the whole node
	    can be hot-removed.

	  Users who don't use the memory hotplug feature are fine with this
	  option on, since they don't specify the movable_node boot option
	  and don't online memory as movable.

	  Say Y here if you want to hotplug a whole node.
	  Say N here if you want the kernel to use memory on all nodes evenly.
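
#
# After boot, memory blocks can be onlined as movable through sysfs; a
# minimal sketch (hypothetical block number, error handling elided):
#
#	#include <stdio.h>
#
#	int main(void)
#	{
#		/* memory32 is a placeholder; pick a real block under
#		   /sys/devices/system/memory/ */
#		FILE *f = fopen("/sys/devices/system/memory/memory32/state", "w");
#		if (!f)
#			return 1;
#		fputs("online_movable", f);	/* instead of plain "online" */
#		fclose(f);
#		return 0;
#	}
#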

#
# Only set on architectures that have completely implemented the memory
# hotplug feature. If you are not sure, don't touch it.
#
config HAVE_BOOTMEM_INFO_NODE
	def_bool n

# eventually, we can have this option just 'select SPARSEMEM'
config MEMORY_HOTPLUG
	bool "Allow for memory hot-add"
	depends on SPARSEMEM || X86_64_ACPI_NUMA
	depends on ARCH_ENABLE_MEMORY_HOTPLUG
	depends on (IA64 || X86 || PPC_BOOK3S_64 || SUPERH || S390)

config MEMORY_HOTPLUG_SPARSE
	def_bool y
	depends on SPARSEMEM && MEMORY_HOTPLUG

config MEMORY_HOTREMOVE
	bool "Allow for memory hot remove"
	select MEMORY_ISOLATION
	select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64)
	depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE

#
# If we have space for more page flags then we can enable additional
# optimizations and functionality.
#
# Regular Sparsemem takes page flag bits for the section id if it does not
# use a virtual memmap. Disable extended page flags for 32-bit platforms
# that require the use of a section id in the page flags.
#
config PAGEFLAGS_EXTENDED
	def_bool y
	depends on 64BIT || SPARSEMEM_VMEMMAP || !SPARSEMEM

# Heavily threaded applications may benefit from splitting the mm-wide
# page_table_lock, so that faults on different parts of the user address
# space can be handled with less contention: split it at this NR_CPUS.
# Default to 4 for wider testing, though 8 might be more appropriate.
# ARM's adjust_pte (unused if VIPT) depends on the mm-wide page_table_lock.
# PA-RISC 7xxx's spinlock_t would enlarge struct page from 32 to 44 bytes.
# DEBUG_SPINLOCK and DEBUG_LOCK_ALLOC spinlock_t also enlarge struct page.
#
config SPLIT_PTLOCK_CPUS
	int
	default "999999" if !MMU
	default "999999" if ARM && !CPU_CACHE_VIPT
	default "999999" if PARISC && !PA20
	default "4"

config ARCH_ENABLE_SPLIT_PMD_PTLOCK
	bool
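
#
# How these knobs are consumed, paraphrased from <linux/mm_types.h>: a
# spinlock per page table page is used only when NR_CPUS reaches the
# threshold configured above.
#
#	#define USE_SPLIT_PTE_PTLOCKS	(NR_CPUS >= CONFIG_SPLIT_PTLOCK_CPUS)
#	#define USE_SPLIT_PMD_PTLOCKS	(USE_SPLIT_PTE_PTLOCKS && \
#			IS_ENABLED(CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK))
#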

#
# support for memory balloon compaction
config BALLOON_COMPACTION
	bool "Allow for balloon memory compaction/migration"
	def_bool y
	depends on COMPACTION && VIRTIO_BALLOON
	help
	  Memory fragmentation introduced by ballooning can significantly
	  reduce the number of 2MB contiguous memory blocks that can be
	  used within a guest, thus imposing performance penalties associated
	  with the reduced number of transparent huge pages that could be used
	  by the guest workload. Allowing compaction and migration of memory
	  pages enlisted as part of memory balloon devices avoids this
	  scenario and helps memory defragmentation.

#
# support for memory compaction
config COMPACTION
	bool "Allow for memory compaction"
	def_bool y
	select MIGRATION
	depends on MMU
	help
	  Allows the compaction of memory for the allocation of huge pages.
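
#
# Compaction can also be triggered by hand for testing; a minimal sketch
# using the documented /proc/sys/vm/compact_memory trigger:
#
#	#include <stdio.h>
#
#	int main(void)
#	{
#		FILE *f = fopen("/proc/sys/vm/compact_memory", "w");
#		if (!f)
#			return 1;	/* needs root and CONFIG_COMPACTION */
#		fputs("1", f);		/* compact all zones of all nodes */
#		fclose(f);
#		return 0;
#	}
#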

#
# support for page migration
#
config MIGRATION
	bool "Page migration"
	def_bool y
	depends on (NUMA || ARCH_ENABLE_MEMORY_HOTREMOVE || COMPACTION || CMA) && MMU
	help
	  Allows the migration of the physical location of pages of processes
	  while the virtual addresses are not changed. This is useful in
	  two situations. The first is on NUMA systems to put pages nearer
	  to the processors accessing them. The second is when allocating huge
	  pages, as migration can relocate pages to satisfy a huge page
	  allocation instead of reclaiming.
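
#
# From userspace, migration is visible through move_pages(2); a minimal
# sketch, assuming a NUMA machine with a node 1 (link with -lnuma):
#
#	#include <numaif.h>
#	#include <stdio.h>
#	#include <stdlib.h>
#
#	int main(void)
#	{
#		void *page;
#		int node = 1, status = -1;
#
#		if (posix_memalign(&page, 4096, 4096))
#			return 1;
#		*(char *)page = 0;	/* fault the page in first */
#		/* move this process's page to node 1 */
#		if (move_pages(0, 1, &page, &node, &status, MPOL_MF_MOVE))
#			perror("move_pages");
#		printf("page now on node %d\n", status);
#		return 0;
#	}
#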

config PHYS_ADDR_T_64BIT
	def_bool 64BIT || ARCH_PHYS_ADDR_T_64BIT

config ZONE_DMA_FLAG
	int
	default "0" if !ZONE_DMA
	default "1"

config BOUNCE
	bool "Enable bounce buffers"
	default y
	depends on BLOCK && MMU && (ZONE_DMA || HIGHMEM)
	help
	  Enable bounce buffers for devices that cannot access
	  the full range of memory available to the CPU. Enabled
	  by default when ZONE_DMA or HIGHMEM is selected, but you
	  may say n to override this.

# On the 'tile' arch, USB OHCI needs the bounce pool since tilegx will often
# have more than 4GB of memory, but we don't currently use the IOTLB to present
# a 32-bit address to OHCI. So we need to use a bounce pool instead.
#
# We also use the bounce pool to provide stable page writes for jbd. jbd
# initiates buffer writeback without locking the page or setting PG_writeback,
# and fixing that behavior (a second time; jbd2 doesn't have this problem) is
# a major rework effort. Instead, use the bounce buffer to snapshot pages
# (until jbd goes away). The only jbd user is ext3.
config NEED_BOUNCE_POOL
	bool
	default y if (TILE && USB_OHCI_HCD) || (BLK_DEV_INTEGRITY && JBD)

config VIRT_TO_BUS
	bool
	help
	  An architecture should select this if it implements the
	  deprecated interface virt_to_bus(). All new architectures
	  should probably not select this.

config KSM
	bool "Enable KSM for page merging"
	depends on MMU
	help
	  Enable Kernel Samepage Merging: KSM periodically scans those areas
	  of an application's address space that an app has advised may be
	  mergeable. When it finds pages of identical content, it replaces
	  the many instances with a single page of that content, thereby
	  saving memory until one or another app needs to modify the content.
	  Recommended for use with KVM, or with other duplicative applications.
	  See Documentation/vm/ksm.txt for more information: KSM is inactive
	  until a program has madvised that an area is MADV_MERGEABLE, and
	  root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
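
#
# A minimal sketch of the userspace side (hypothetical 2MB buffer);
# merging then starts once root writes 1 to /sys/kernel/mm/ksm/run:
#
#	#include <sys/mman.h>
#
#	int main(void)
#	{
#		size_t len = 2 << 20;
#		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
#				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#		if (buf == MAP_FAILED)
#			return 1;
#		/* advise the kernel that this area may contain duplicates */
#		return madvise(buf, len, MADV_MERGEABLE);
#	}
#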

config DEFAULT_MMAP_MIN_ADDR
	int "Low address space to protect from user allocation"
	depends on MMU
	default 4096
	help
	  This is the portion of low virtual memory which should be protected
	  from userspace allocation. Keeping a user from writing to low pages
	  can help reduce the impact of kernel NULL pointer bugs.

	  For most ia64, ppc64 and x86 users with lots of address space
	  a value of 65536 is reasonable and should cause no problems.
	  On arm and other archs it should not be higher than 32768.
	  Programs which use vm86 functionality or have some need to map
	  this low address space will need CAP_SYS_RAWIO, or to disable this
	  protection by setting the value to 0.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_min_addr tunable.
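
#
# The effect is observable from userspace: with the default of 4096, a
# fixed mapping at page 0 is refused for unprivileged processes. A
# minimal sketch (expected to fail with EPERM unless the process has
# CAP_SYS_RAWIO or the tunable is 0):
#
#	#include <stdio.h>
#	#include <sys/mman.h>
#
#	int main(void)
#	{
#		void *p = mmap((void *)0, 4096, PROT_READ | PROT_WRITE,
#			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
#		if (p == MAP_FAILED)
#			perror("mmap at address 0");	/* typically EPERM */
#		return 0;
#	}
#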

config ARCH_SUPPORTS_MEMORY_FAILURE
	bool

config MEMORY_FAILURE
	depends on MMU
	depends on ARCH_SUPPORTS_MEMORY_FAILURE
	bool "Enable recovery from hardware memory errors"
	select MEMORY_ISOLATION
	help
	  Enables code to recover from some memory failures on systems
	  with MCA recovery. This allows a system to continue running
	  even when some of its memory has uncorrected errors. This requires
	  special hardware support and typically ECC memory.

config HWPOISON_INJECT
	tristate "HWPoison pages injector"
	depends on MEMORY_FAILURE && DEBUG_KERNEL && PROC_FS
	select PROC_PAGE_MONITOR
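
#
# For testing, a process with CAP_SYS_ADMIN can also poison one of its
# own pages via madvise(2); a minimal sketch (this deliberately triggers
# the memory-failure path on the page, so run it only on a test box):
#
#	#include <sys/mman.h>
#
#	int main(void)
#	{
#		size_t len = 4096;
#		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
#				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#		if (buf == MAP_FAILED)
#			return 1;
#		buf[0] = 1;	/* fault the page in */
#		/* simulate an uncorrected error on this page */
#		return madvise(buf, len, MADV_HWPOISON);
#	}
#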

config NOMMU_INITIAL_TRIM_EXCESS
	int "Turn on mmap() excess space trimming before booting"
	depends on !MMU
	default 1
	help
	  The NOMMU mmap() frequently needs to allocate large contiguous chunks
	  of memory on which to store mappings, but it can only ask the system
	  allocator for chunks in 2^N*PAGE_SIZE amounts - which is frequently
	  more than it requires. To deal with this, mmap() is able to trim off
	  the excess and return it to the allocator.

	  If trimming is enabled, the excess is trimmed off and returned to the
	  system allocator, which can cause extra fragmentation, particularly
	  if there are a lot of transient processes.

	  If trimming is disabled, the excess is kept, but not used, which for
	  long-term mappings means that the space is wasted.

	  Trimming can be dynamically controlled through a sysctl option
	  (/proc/sys/vm/nr_trim_pages) which specifies the minimum number of
	  excess pages there must be before trimming should occur, or zero if
	  no trimming is to occur.

	  This option specifies the initial value of this tunable. The
	  default of 1 says that all excess pages should be trimmed.

	  See Documentation/nommu-mmap.txt for more information.

config TRANSPARENT_HUGEPAGE
	bool "Transparent Hugepage Support"
	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE
	select COMPACTION
	help
	  Transparent Hugepages allows the kernel to use huge pages and
	  huge TLB entries transparently for applications whenever possible.
	  This feature can improve computing performance for certain
	  applications by speeding up page faults during memory
	  allocation, by reducing the number of TLB misses and by speeding
	  up page table walks.

	  If memory is constrained on an embedded system, you may want to say N.

choice
	prompt "Transparent Hugepage Support sysfs defaults"
	depends on TRANSPARENT_HUGEPAGE
	default TRANSPARENT_HUGEPAGE_ALWAYS
	help
	  Selects the sysfs defaults for Transparent Hugepage Support.

config TRANSPARENT_HUGEPAGE_ALWAYS
	bool "always"
	help
	  Enabling Transparent Hugepage always can increase the
	  memory footprint of applications without a guaranteed
	  benefit, but it will work automatically for all applications.

config TRANSPARENT_HUGEPAGE_MADVISE
	bool "madvise"
	help
	  Enabling Transparent Hugepage madvise will only provide a
	  performance benefit to applications using
	  madvise(MADV_HUGEPAGE), but it won't risk increasing the
	  memory footprint of applications without a guaranteed
	  benefit.

endchoice
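
#
# With the madvise default, an application opts a region in explicitly;
# a minimal sketch (hypothetical 8MB anonymous region):
#
#	#include <sys/mman.h>
#
#	int main(void)
#	{
#		size_t len = 8 << 20;
#		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
#				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#		if (buf == MAP_FAILED)
#			return 1;
#		/* ask for huge pages on this range only */
#		return madvise(buf, len, MADV_HUGEPAGE);
#	}
#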

config CROSS_MEMORY_ATTACH
	bool "Cross Memory Support"
	depends on MMU
	default y
	help
	  Enabling this option adds the system calls process_vm_readv and
	  process_vm_writev which allow a process with the correct privileges
	  to directly read from or write to another process's address space.
	  See the man page for more details.
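
#
# A minimal read sketch; the pid and remote address are placeholders
# (e.g. taken from the target's /proc/<pid>/maps), and ptrace-level
# permission on the target is required:
#
#	#include <stdio.h>
#	#include <sys/types.h>
#	#include <sys/uio.h>
#
#	int main(void)
#	{
#		char buf[64];
#		struct iovec local = { buf, sizeof(buf) };
#		struct iovec remote = { (void *)0x400000, sizeof(buf) };
#		pid_t pid = 1234;	/* placeholder target pid */
#
#		ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
#		if (n < 0)
#			perror("process_vm_readv");
#		else
#			printf("read %zd bytes\n", n);
#		return 0;
#	}
#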

#
# UP and nommu archs use km based percpu allocator
#
config NEED_PER_CPU_KM
	depends on !SMP
	bool
	default y

config CLEANCACHE
	bool "Enable cleancache driver to cache clean pages if tmem is present"
	default n
	help
	  Cleancache can be thought of as a page-granularity victim cache
	  for clean pages that the kernel's pageframe replacement algorithm
	  (PFRA) would like to keep around, but can't since there isn't enough
	  memory. So when the PFRA "evicts" a page, it first attempts to use
	  cleancache code to put the data contained in that page into
	  "transcendent memory", memory that is not directly accessible or
	  addressable by the kernel and is of unknown and possibly
	  time-varying size. And when a cleancache-enabled
	  filesystem wishes to access a page in a file on disk, it first
	  checks cleancache to see if it already contains it; if it does,
	  the page is copied into the kernel and a disk access is avoided.
	  When a transcendent memory driver is available (such as zcache or
	  Xen transcendent memory), a significant I/O reduction
	  may be achieved. When none is available, all cleancache calls
	  are reduced to a single pointer-compare-against-NULL, resulting
	  in a negligible performance hit.

	  If unsure, say Y to enable cleancache.

config FRONTSWAP
	bool "Enable frontswap to cache swap pages if tmem is present"
	depends on SWAP
	default n
	help
	  Frontswap is so named because it can be thought of as the opposite
	  of a "backing" store for a swap device. The data is stored into
	  "transcendent memory", memory that is not directly accessible or
	  addressable by the kernel and is of unknown and possibly
	  time-varying size. When space in transcendent memory is available,
	  a significant swap I/O reduction may be achieved. When none is
	  available, all frontswap calls are reduced to a single pointer-
	  compare-against-NULL, resulting in a negligible performance hit,
	  and swap data is stored as normal on the matching swap device.

	  If unsure, say Y to enable frontswap.

config CMA
	bool "Contiguous Memory Allocator"
	depends on HAVE_MEMBLOCK && MMU
	select MIGRATION
	select MEMORY_ISOLATION
	help
	  This enables the Contiguous Memory Allocator which allows other
	  subsystems to allocate big physically-contiguous blocks of memory.
	  CMA reserves a region of memory and allows only movable pages to
	  be allocated from it. This way, the kernel can use the memory for
	  pagecache, and when a subsystem requests a contiguous area, the
	  allocated pages are migrated away to serve the contiguous request.

	  If unsure, say "n".
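
#
# In-kernel users allocate through the CMA-backed DMA helpers; a sketch
# of the calls as declared in <linux/dma-contiguous.h> of this era, for
# a hypothetical driver holding a struct device *dev and wanting 16
# pages (error handling elided):
#
#	#include <linux/dma-contiguous.h>
#
#	struct page *pages;
#
#	/* count = 16 pages, align = order 0 */
#	pages = dma_alloc_from_contiguous(dev, 16, 0);
#	if (pages)
#		dma_release_from_contiguous(dev, pages, 16);
#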

config CMA_DEBUG
	bool "CMA debug messages (DEVELOPMENT)"
	depends on DEBUG_KERNEL && CMA
	help
	  Turns on debug messages in CMA. This produces KERN_DEBUG
	  messages for every CMA call as well as various messages while
	  processing calls such as dma_alloc_from_contiguous().
	  This option does not affect warning and error messages.

config ZBUD
	tristate
	default n
	help
	  A special purpose allocator for storing compressed pages.
	  It is designed to store up to two compressed pages per physical
	  page. While this design limits storage density, it has simple and
	  deterministic reclaim properties that make it preferable to a higher
	  density approach when reclaim will be used.

config ZSWAP
	bool "Compressed cache for swap pages (EXPERIMENTAL)"
	depends on FRONTSWAP && CRYPTO=y
	select CRYPTO_LZO
	select ZBUD
	default n
	help
	  A lightweight compressed cache for swap pages. It takes
	  pages that are in the process of being swapped out and attempts to
	  compress them into a dynamically allocated RAM-based memory pool.
	  This can result in a significant I/O reduction on the swap device
	  and, in the case where decompressing from RAM is faster than swap
	  device reads, can also improve workload performance.

	  This is marked experimental because it is a new feature (as of
	  v3.11) that interacts heavily with memory reclaim. While these
	  interactions don't cause any known issues on simple memory setups,
	  they have not been fully explored on the large set of potential
	  configurations and workloads that exist.

config MEM_SOFT_DIRTY
	bool "Track memory changes"
	depends on CHECKPOINT_RESTORE && HAVE_ARCH_SOFT_DIRTY && PROC_FS
	select PROC_PAGE_MONITOR
	help
	  This option enables memory changes tracking by introducing a
	  soft-dirty bit on pte-s. This bit is set when someone writes
	  into a page, just like the regular dirty bit, but unlike the
	  latter it can be cleared by hand.

	  See Documentation/vm/soft-dirty.txt for more details.
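
#
# A minimal userspace sketch of the cycle described in soft-dirty.txt:
# clear the bits via /proc/<pid>/clear_refs, then look for bit 55 in the
# /proc/<pid>/pagemap entries (one 8-byte entry per page; error handling
# elided):
#
#	#include <fcntl.h>
#	#include <stdint.h>
#	#include <stdio.h>
#	#include <unistd.h>
#
#	int main(void)
#	{
#		/* 1. clear soft-dirty bits for this process */
#		int fd = open("/proc/self/clear_refs", O_WRONLY);
#		write(fd, "4", 1);
#		close(fd);
#
#		/* ... the workload touches some memory here ... */
#
#		/* 2. re-read pagemap: bit 55 set => written since clear */
#		uint64_t entry;
#		int pm = open("/proc/self/pagemap", O_RDONLY);
#		pread(pm, &entry, sizeof(entry),
#		      ((uintptr_t)&entry / 4096) * 8);	/* entry's own page */
#		printf("soft-dirty: %d\n", (int)((entry >> 55) & 1));
#		close(pm);
#		return 0;
#	}
#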

config ZSMALLOC
	bool "Memory allocator for compressed pages"
	depends on MMU
	default n
	help
	  zsmalloc is a slab-based memory allocator designed to store
	  compressed RAM pages. zsmalloc uses virtual memory mapping
	  in order to reduce fragmentation. However, this results in a
	  non-standard allocator interface where a handle, not a pointer, is
	  returned by an alloc(). This handle must be mapped in order to
	  access the allocated space.
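
#
# A sketch of the handle-based interface as declared in
# <linux/zsmalloc.h> of this kernel series (in-kernel code only; error
# handling elided):
#
#	#include <linux/zsmalloc.h>
#
#	struct zs_pool *pool = zs_create_pool(GFP_KERNEL);
#	unsigned long handle = zs_malloc(pool, 128);	/* not a pointer */
#	void *obj = zs_map_object(pool, handle, ZS_MM_RW);
#	memset(obj, 0, 128);		/* only valid while mapped */
#	zs_unmap_object(pool, handle);
#	zs_free(pool, handle);
#	zs_destroy_pool(pool);
#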

config PGTABLE_MAPPING
	bool "Use page table mapping to access object in zsmalloc"
	depends on ZSMALLOC
	help
	  By default, zsmalloc uses a copy-based object mapping method to
	  access allocations that span two pages. However, if a particular
	  architecture (e.g., ARM) performs VM mapping faster than copying,
	  then you should select this. This causes zsmalloc to use page table
	  mapping rather than copying for object mapping.

	  You can check speed with the zsmalloc benchmark:
	  https://github.com/spartacus06/zsmapbench

config GENERIC_EARLY_IOREMAP
	bool

config MAX_STACK_SIZE_MB
	int "Maximum user stack size for 32-bit processes (MB)"
	default 80
	depends on STACK_GROWSUP && (!64BIT || COMPAT)
	help
	  This is the maximum stack size in megabytes in the VM layout of
	  32-bit user processes when the stack grows upwards (currently only
	  on parisc and metag). The stack will be located at the highest
	  memory address minus the given value, unless the RLIMIT_STACK hard
	  limit is changed to a smaller value, in which case that is used.

	  A sane initial value is 80 MB.