1 Documentation for /proc/sys/vm/* kernel version 2.6.29
2 (c) 1998, 1999, Rik van Riel <riel@nl.linux.org>
3 (c) 2008 Peter W. Morreale <pmorreale@novell.com>
5 For general info and legal blurb, please look in README.
7 ==============================================================
9 This file contains the documentation for the sysctl files in
10 /proc/sys/vm and is valid for Linux kernel version 2.6.29.
12 The files in this directory can be used to tune the operation
13 of the virtual memory (VM) subsystem of the Linux kernel and
14 the writeout of dirty data to disk.
16 Default values and initialization routines for most of these
17 files can be found in mm/swap.c.
19 Currently, these files are in /proc/sys/vm:
- admin_reserve_kbytes
- block_dump
- compact_memory
- compact_unevictable_allowed
- dirty_background_bytes
- dirty_background_ratio
- dirty_bytes
- dirty_expire_centisecs
- dirty_ratio
- dirty_writeback_centisecs
- drop_caches
- extfrag_threshold
- extra_free_kbytes
- hugepages_treat_as_movable
- hugetlb_shm_group
- laptop_mode
- legacy_va_layout
- lowmem_reserve_ratio
- max_map_count
- memory_failure_early_kill
- memory_failure_recovery
- min_free_kbytes
- min_slab_ratio
- min_unmapped_ratio
- mmap_min_addr
- mmap_rnd_bits
- mmap_rnd_compat_bits
- nr_hugepages
- nr_overcommit_hugepages
- nr_trim_pages (only if CONFIG_MMU=n)
- numa_zonelist_order
- oom_dump_tasks
- oom_kill_allocating_task
- overcommit_kbytes
- overcommit_memory
- overcommit_ratio
- page-cluster
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
- zone_reclaim_mode
66 ==============================================================
admin_reserve_kbytes

The amount of free memory in the system that should be reserved for users
71 with the capability cap_sys_admin.
73 admin_reserve_kbytes defaults to min(3% of free pages, 8MB)
75 That should provide enough for the admin to log in and kill a process,
76 if necessary, under the default overcommit 'guess' mode.
78 Systems running under overcommit 'never' should increase this to account
79 for the full Virtual Memory Size of programs used to recover. Otherwise,
80 root may not be able to log in to recover the system.
82 How do you calculate a minimum useful reserve?
84 sshd or login + bash (or some other shell) + top (or ps, kill, etc.)
86 For overcommit 'guess', we can sum resident set sizes (RSS).
87 On x86_64 this is about 8MB.
89 For overcommit 'never', we can take the max of their virtual sizes (VSZ)
90 and add the sum of their RSS.
91 On x86_64 this is about 128MB.
93 Changing this takes effect whenever an application requests memory.
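For example, with the illustrative sizes above, a system running in
overcommit 'never' mode might raise the reserve to roughly 128MB (the
figure is an estimate, not a recommendation):

	echo 131072 > /proc/sys/vm/admin_reserve_kbytes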
95 ==============================================================
block_dump

block_dump enables block I/O debugging when set to a nonzero value. More
100 information on block I/O debugging is in Documentation/laptops/laptop-mode.txt.
102 ==============================================================
compact_memory

Available only when CONFIG_COMPACTION is set. When 1 is written to the file,
107 all zones are compacted such that free memory is available in contiguous
108 blocks where possible. This can be important for example in the allocation of
109 huge pages although processes will also directly compact memory as required.
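For example, to trigger a one-off system-wide compaction pass by hand:

	echo 1 > /proc/sys/vm/compact_memory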
111 ==============================================================
113 compact_unevictable_allowed
115 Available only when CONFIG_COMPACTION is set. When set to 1, compaction is
116 allowed to examine the unevictable lru (mlocked pages) for pages to compact.
117 This should be used on systems where stalls for minor page faults are an
118 acceptable trade for large contiguous free memory. Set to 0 to prevent
119 compaction from moving pages that are unevictable. Default value is 1.
121 ==============================================================
123 dirty_background_bytes
125 Contains the amount of dirty memory at which the background kernel
126 flusher threads will start writeback.
128 Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only
129 one of them may be specified at a time. When one sysctl is written it is
130 immediately taken into account to evaluate the dirty memory limits and the
131 other appears as 0 when read.
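For example (the 256MB figure is purely illustrative):

	echo 268435456 > /proc/sys/vm/dirty_background_bytes
	cat /proc/sys/vm/dirty_background_ratio    # now reads 0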
133 ==============================================================
135 dirty_background_ratio
137 Contains, as a percentage of total available memory that contains free pages
138 and reclaimable pages, the number of pages at which the background kernel
139 flusher threads will start writing out dirty data.
The total available memory is not equal to total system memory.
143 ==============================================================
dirty_bytes

Contains the amount of dirty memory at which a process generating disk writes
148 will itself start writeback.
150 Note: dirty_bytes is the counterpart of dirty_ratio. Only one of them may be
151 specified at a time. When one sysctl is written it is immediately taken into
account to evaluate the dirty memory limits and the other appears as 0 when
read.

Note: the minimum value allowed for dirty_bytes is two pages (in bytes); any
value lower than this limit will be ignored and the old configuration will be
retained.
159 ==============================================================
161 dirty_expire_centisecs
163 This tunable is used to define when dirty data is old enough to be eligible
164 for writeout by the kernel flusher threads. It is expressed in 100'ths
165 of a second. Data which has been dirty in-memory for longer than this
166 interval will be written out next time a flusher thread wakes up.
168 ==============================================================
dirty_ratio

Contains, as a percentage of total available memory that contains free pages
173 and reclaimable pages, the number of pages at which a process which is
174 generating disk writes will itself start writing out dirty data.
The total available memory is not equal to total system memory.
178 ==============================================================
180 dirty_writeback_centisecs
182 The kernel flusher threads will periodically wake up and write `old' data
out to disk. This tunable expresses the interval between those wakeups, in
100'ths of a second.
186 Setting this to zero disables periodic writeback altogether.
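For example, to wake the flusher threads every 15 seconds (an illustrative
value):

	echo 1500 > /proc/sys/vm/dirty_writeback_centisecs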
188 ==============================================================
drop_caches

Writing to this will cause the kernel to drop clean caches, as well as
reclaimable slab objects like dentries and inodes. Once dropped, their
memory becomes free.

To free pagecache:
	echo 1 > /proc/sys/vm/drop_caches
To free reclaimable slab objects (includes dentries and inodes):
	echo 2 > /proc/sys/vm/drop_caches
To free slab objects and pagecache:
	echo 3 > /proc/sys/vm/drop_caches
203 This is a non-destructive operation and will not free any dirty objects.
204 To increase the number of objects freed by this operation, the user may run
205 `sync' prior to writing to /proc/sys/vm/drop_caches. This will minimize the
number of dirty objects on the system and create more candidates to be
dropped.
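For example, to drop as much as possible in one shot:

	sync
	echo 3 > /proc/sys/vm/drop_caches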
209 This file is not a means to control the growth of the various kernel caches
210 (inodes, dentries, pagecache, etc...) These objects are automatically
211 reclaimed by the kernel when memory is needed elsewhere on the system.
213 Use of this file can cause performance problems. Since it discards cached
214 objects, it may cost a significant amount of I/O and CPU to recreate the
215 dropped objects, especially if they were under heavy use. Because of this,
216 use outside of a testing or debugging environment is not recommended.
You may see informational messages in your kernel log when this file is
used:

	cat (1234): drop_caches: 3
223 These are informational only. They do not mean that anything is wrong
224 with your system. To disable them, echo 4 (bit 3) into drop_caches.
226 ==============================================================
extfrag_threshold

This parameter affects whether the kernel will compact memory or direct
231 reclaim to satisfy a high-order allocation. The extfrag/extfrag_index file in
232 debugfs shows what the fragmentation index for each order is in each zone in
233 the system. Values tending towards 0 imply allocations would fail due to lack
234 of memory, values towards 1000 imply failures are due to fragmentation and -1
235 implies that the allocation will succeed as long as watermarks are met.
237 The kernel will not compact memory in a zone if the
238 fragmentation index is <= extfrag_threshold. The default value is 500.
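For example, assuming debugfs is mounted at /sys/kernel/debug, the per-zone
fragmentation indexes can be inspected and the threshold raised (750 is only
an illustrative value; a higher threshold means compaction is attempted only
when failures look fragmentation-driven):

	cat /sys/kernel/debug/extfrag/extfrag_index
	echo 750 > /proc/sys/vm/extfrag_threshold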
240 ==============================================================
extra_free_kbytes

This parameter tells the VM to keep extra free memory between the threshold
245 where background reclaim (kswapd) kicks in, and the threshold where direct
246 reclaim (by allocating processes) kicks in.
248 This is useful for workloads that require low latency memory allocations
249 and have a bounded burstiness in memory allocations, for example a
250 realtime application that receives and transmits network traffic
251 (causing in-kernel memory allocations) with a maximum total message burst
size of 200MB may need 200MB of extra free memory to avoid direct reclaim.
255 ==============================================================
257 hugepages_treat_as_movable
259 This parameter controls whether we can allocate hugepages from ZONE_MOVABLE
260 or not. If set to non-zero, hugepages can be allocated from ZONE_MOVABLE.
261 ZONE_MOVABLE is created when kernel boot parameter kernelcore= is specified,
262 so this parameter has no effect if used without kernelcore=.
264 Hugepage migration is now available in some situations which depend on the
265 architecture and/or the hugepage size. If a hugepage supports migration,
266 allocation from ZONE_MOVABLE is always enabled for the hugepage regardless
267 of the value of this parameter.
In other words, this parameter affects only non-migratable hugepages.
Assuming that hugepages are not migratable on your system, one use case for
this parameter is making the hugepage pool more extensible by allowing
allocation from ZONE_MOVABLE. This is because page reclaim, migration and
compaction work better in ZONE_MOVABLE, so contiguous memory is more likely
to be found there. Note that using ZONE_MOVABLE for non-migratable hugepages
can harm other features such as memory hot-remove (which expects that memory
blocks in ZONE_MOVABLE are always removable), so this is a trade-off that
users are responsible for.
279 ==============================================================
hugetlb_shm_group

hugetlb_shm_group contains the group id that is allowed to create SysV
shared memory segments using hugetlb pages.
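For example, to allow members of group id 1001 (an illustrative gid) to
create hugetlb-backed SysV shared memory:

	echo 1001 > /proc/sys/vm/hugetlb_shm_group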
286 ==============================================================
laptop_mode

laptop_mode is a knob that controls "laptop mode". All the things that are
291 controlled by this knob are discussed in Documentation/laptops/laptop-mode.txt.
293 ==============================================================
legacy_va_layout

If non-zero, this sysctl disables the new 32-bit mmap layout - the kernel
298 will use the legacy (2.4) layout for all processes.
300 ==============================================================
lowmem_reserve_ratio

For some specialised workloads on highmem machines it is dangerous for
305 the kernel to allow process memory to be allocated from the "lowmem"
306 zone. This is because that memory could then be pinned via the mlock()
307 system call, or by unavailability of swapspace.
And on large highmem machines this lack of reclaimable lowmem memory
can be fatal.
312 So the Linux page allocator has a mechanism which prevents allocations
313 which _could_ use highmem from using too much lowmem. This means that
314 a certain amount of lowmem is defended from the possibility of being
315 captured into pinned user memory.
317 (The same argument applies to the old 16 megabyte ISA DMA region. This
mechanism will also defend that region from allocations which could use
highmem or lowmem).
321 The `lowmem_reserve_ratio' tunable determines how aggressive the kernel is
322 in defending these lower zones.
324 If you have a machine which uses highmem or ISA DMA and your
325 applications are using mlock(), or if you are running with no swap then
326 you probably should change the lowmem_reserve_ratio setting.
The lowmem_reserve_ratio is an array. You can see its values by reading this
file:

% cat /proc/sys/vm/lowmem_reserve_ratio
256     256     32

Note: the number of elements is one fewer than the number of zones, because
the highest zone's value is not needed for the calculation below.
But these values are not used directly. The kernel calculates the number of
protection pages for each zone from them. These are shown as an array of
protection pages in /proc/zoneinfo, like the following (this is an example
from an x86-64 box). Each zone has an array of protection pages like this:

Node 0, zone      DMA
  pages free     1355
        min      3
        low      3
        high     4
        :
        protection: (0, 2004, 2004, 2004)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
These protection values are added to the watermark when judging whether this
zone should be used for page allocation or should be reclaimed instead.

In this example, if normal pages (index=2) are requested from this DMA zone
and watermark[WMARK_HIGH] is used as the watermark, the kernel judges that
this zone should not be used because pages_free (1355) is smaller than
watermark + protection[2] (4 + 2004 = 2008). If the protection value were 0,
this zone would be used for a normal page request. For a DMA zone request
(index=0), protection[0] (=0) is used.
zone[i]'s protection[j] is calculated by the following expression:

(i < j):
  zone[i]->protection[j]
    = (total sum of managed_pages from zone[i+1] to zone[j] on the node)
      / lowmem_reserve_ratio[i];
(i = j):
  (should not be protected. = 0;)
(i > j):
  (not necessary, but reads as 0)
The default values of lowmem_reserve_ratio[i] are
    256 (if zone[i] means DMA or DMA32 zone)
    32  (others)
As the expression above shows, these values are reciprocals of the ratio:
256 means 1/256, so the number of protection pages becomes about 0.39% of
the total managed pages of the higher zones on the node.
384 If you would like to protect more pages, smaller values are effective.
385 The minimum value is 1 (1/1 -> 100%).
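As a rough worked example under the default ratio of 256 (the 513024 figure
is only illustrative, chosen to match the protection value shown above):

	higher zones' managed_pages = 513024   (about 2GB with 4KB pages)
	protection = 513024 / 256 = 2004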
387 ==============================================================
max_map_count:

This file contains the maximum number of memory map areas a process
392 may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.
396 While most applications need less than a thousand maps, certain
397 programs, particularly malloc debuggers, may consume lots of them,
398 e.g., up to one or two maps per allocation.
400 The default value is 65536.
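For example, to raise the limit for map-hungry workloads (the value below is
only illustrative):

	echo 262144 > /proc/sys/vm/max_map_count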
402 =============================================================
404 memory_failure_early_kill:
Control how to kill processes when an uncorrected memory error (typically
a 2-bit error in a memory module), which cannot be handled by the kernel,
is detected in the background by hardware. In some cases (like the page
409 still having a valid copy on disk) the kernel will handle the failure
410 transparently without affecting any applications. But if there is
411 no other uptodate copy of the data it will kill to prevent any data
412 corruptions from propagating.
414 1: Kill all processes that have the corrupted and not reloadable page mapped
415 as soon as the corruption is detected. Note this is not supported
416 for a few types of pages, like kernel internally allocated data or
417 the swap cache, but works for the majority of user pages.
0: Only unmap the corrupted page from all processes and only kill a process
that tries to access it.
422 The kill is done using a catchable SIGBUS with BUS_MCEERR_AO, so processes can
423 handle this if they want to.
425 This is only active on architectures/platforms with advanced machine
426 check handling and depends on the hardware capabilities.
Applications can override this setting individually with the PR_MCE_KILL prctl.
430 ==============================================================
432 memory_failure_recovery
Enable memory failure recovery (when supported by the platform)

1: Attempt recovery.

0: Always panic on a memory failure.
440 ==============================================================
min_free_kbytes:

This is used to force the Linux VM to keep a minimum number
445 of kilobytes free. The VM uses this number to compute a
446 watermark[WMARK_MIN] value for each lowmem zone in the system.
447 Each lowmem zone gets a number of reserved free pages based
448 proportionally on its size.
450 Some minimal amount of memory is needed to satisfy PF_MEMALLOC
451 allocations; if you set this to lower than 1024KB, your system will
452 become subtly broken, and prone to deadlock under high loads.
454 Setting this too high will OOM your machine instantly.
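For example, to inspect the current value and reserve roughly 64MB (an
illustrative amount):

	cat /proc/sys/vm/min_free_kbytes
	echo 65536 > /proc/sys/vm/min_free_kbytes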
456 =============================================================
min_slab_ratio:

This is available only on NUMA kernels.
A percentage of the total pages in each zone. When zone reclaim occurs
(i.e. fallback from the local zone), slabs will be reclaimed if more than
this percentage of pages in a zone are reclaimable slab pages. This ensures
that slab growth stays under control even in NUMA systems that rarely
perform global reclaim.
468 The default is 5 percent.
470 Note that slab reclaim is triggered in a per zone / node fashion.
The process of reclaiming slab memory is currently not node specific
and may not be fast.
474 =============================================================
min_unmapped_ratio:

This is available only on NUMA kernels.
480 This is a percentage of the total pages in each zone. Zone reclaim will
481 only occur if more than this percentage of pages are in a state that
482 zone_reclaim_mode allows to be reclaimed.
484 If zone_reclaim_mode has the value 4 OR'd, then the percentage is compared
485 against all file-backed unmapped pages including swapcache pages and tmpfs
486 files. Otherwise, only unmapped pages backed by normal files but not tmpfs
487 files and similar are considered.
489 The default is 1 percent.
491 ==============================================================
mmap_min_addr

This file indicates the amount of address space which a user process will
496 be restricted from mmapping. Since kernel null dereference bugs could
497 accidentally operate based on the information in the first couple of pages
of memory, userspace processes should not be allowed to write to them. By
499 default this value is set to 0 and no protections will be enforced by the
500 security module. Setting this value to something like 64k will allow the
501 vast majority of applications to work correctly and provide defense in depth
502 against future potential kernel bugs.
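For example, to disallow mappings in the first 64k of address space:

	echo 65536 > /proc/sys/vm/mmap_min_addr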
504 ==============================================================
mmap_rnd_bits:

This value can be used to select the number of bits to use to
509 determine the random offset to the base address of vma regions
510 resulting from mmap allocations on architectures which support
511 tuning address space randomization. This value will be bounded
512 by the architecture's minimum and maximum supported values.
514 This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_bits tunable.
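For example, to inspect the current setting and request more randomization
bits (32 is only illustrative and must lie within the architecture's
supported range):

	cat /proc/sys/vm/mmap_rnd_bits
	echo 32 > /proc/sys/vm/mmap_rnd_bits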
517 ==============================================================
519 mmap_rnd_compat_bits:
521 This value can be used to select the number of bits to use to
522 determine the random offset to the base address of vma regions
523 resulting from mmap allocations for applications run in
524 compatibility mode on architectures which support tuning address
525 space randomization. This value will be bounded by the
526 architecture's minimum and maximum supported values.
528 This value can be changed after boot using the
/proc/sys/vm/mmap_rnd_compat_bits tunable.
531 ==============================================================
nr_hugepages

Change the minimum size of the hugepage pool.
537 See Documentation/vm/hugetlbpage.txt
539 ==============================================================
541 nr_overcommit_hugepages
543 Change the maximum size of the hugepage pool. The maximum is
544 nr_hugepages + nr_overcommit_hugepages.
546 See Documentation/vm/hugetlbpage.txt
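For example, to keep a persistent pool of 128 huge pages and allow up to 64
more to be allocated on demand (both figures are illustrative):

	echo 128 > /proc/sys/vm/nr_hugepages
	echo 64 > /proc/sys/vm/nr_overcommit_hugepages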
548 ==============================================================
nr_trim_pages

This is available only on NOMMU kernels.
554 This value adjusts the excess page trimming behaviour of power-of-2 aligned
555 NOMMU mmap allocations.
557 A value of 0 disables trimming of allocations entirely, while a value of 1
558 trims excess pages aggressively. Any value >= 1 acts as the watermark where
559 trimming of allocations is initiated.
561 The default value is 1.
563 See Documentation/nommu-mmap.txt for more information.
565 ==============================================================
numa_zonelist_order

This sysctl is only for NUMA. 'Where the memory is allocated from' is
controlled by zonelists.
(This documentation ignores ZONE_HIGHMEM/ZONE_DMA32 for a simpler
explanation; you may read ZONE_DMA as ZONE_DMA32 where appropriate.)
In the non-NUMA case, a zonelist for GFP_KERNEL is ordered as follows.
575 ZONE_NORMAL -> ZONE_DMA
576 This means that a memory allocation request for GFP_KERNEL will
577 get memory from ZONE_DMA only when ZONE_NORMAL is not available.
In the NUMA case, you can think of the following two types of order.
Assume a 2-node NUMA system; below are possible zonelists for Node(0)'s
GFP_KERNEL allocations:
582 (A) Node(0) ZONE_NORMAL -> Node(0) ZONE_DMA -> Node(1) ZONE_NORMAL
583 (B) Node(0) ZONE_NORMAL -> Node(1) ZONE_NORMAL -> Node(0) ZONE_DMA.
Type(A) offers the best locality for processes on Node(0), but ZONE_DMA
will be used before ZONE_NORMAL is exhausted. This increases the possibility
of out-of-memory (OOM) in ZONE_DMA, because ZONE_DMA tends to be small.

Type(B) cannot offer the best locality, but is more robust against OOM of
the DMA zone.
Type(A) is called "Node" order. Type(B) is "Zone" order.
594 "Node order" orders the zonelists by node, then by zone within each node.
Specify "[Nn]ode" for node order.
597 "Zone Order" orders the zonelists by zone type, then by node within each
598 zone. Specify "[Zz]one" for zone order.
600 Specify "[Dd]efault" to request automatic configuration. Autoconfiguration
will select "node" order in the following cases:
602 (1) if the DMA zone does not exist or
603 (2) if the DMA zone comprises greater than 50% of the available memory or
604 (3) if any node's DMA zone comprises greater than 70% of its local memory and
605 the amount of local memory is big enough.
607 Otherwise, "zone" order will be selected. Default order is recommended unless
608 this is causing problems for your system/application.
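For example, to force zone order and then check the active setting:

	echo zone > /proc/sys/vm/numa_zonelist_order
	cat /proc/sys/vm/numa_zonelist_order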
610 ==============================================================
oom_dump_tasks

Enables a system-wide task dump (excluding kernel threads) to be produced
615 when the kernel performs an OOM-killing and includes such information as
616 pid, uid, tgid, vm size, rss, nr_ptes, nr_pmds, swapents, oom_score_adj
617 score, and name. This is helpful to determine why the OOM killer was
618 invoked, to identify the rogue task that caused it, and to determine why
619 the OOM killer chose the task it did to kill.
621 If this is set to zero, this information is suppressed. On very
622 large systems with thousands of tasks it may not be feasible to dump
623 the memory state information for each one. Such systems should not
624 be forced to incur a performance penalty in OOM conditions when the
625 information may not be desired.
627 If this is set to non-zero, this information is shown whenever the
628 OOM killer actually kills a memory-hogging task.
630 The default value is 1 (enabled).
632 ==============================================================
634 oom_kill_allocating_task
636 This enables or disables killing the OOM-triggering task in
637 out-of-memory situations.
639 If this is set to zero, the OOM killer will scan through the entire
640 tasklist and select a task based on heuristics to kill. This normally
selects a rogue memory-hogging task that frees up a large amount of
memory when killed.
644 If this is set to non-zero, the OOM killer simply kills the task that
triggered the out-of-memory condition. This avoids the expensive
tasklist scan.
648 If panic_on_oom is selected, it takes precedence over whatever value
649 is used in oom_kill_allocating_task.
651 The default value is 0.
653 ==============================================================
overcommit_kbytes:

When overcommit_memory is set to 2, the committed address space is not
658 permitted to exceed swap plus this amount of physical RAM. See below.
660 Note: overcommit_kbytes is the counterpart of overcommit_ratio. Only one
661 of them may be specified at a time. Setting one disables the other (which
662 then appears as 0 when read).
664 ==============================================================
overcommit_memory:

This value contains a flag that enables memory overcommitment.
670 When this flag is 0, the kernel attempts to estimate the amount
671 of free memory left when userspace requests more memory.
673 When this flag is 1, the kernel pretends there is always enough
674 memory until it actually runs out.
676 When this flag is 2, the kernel uses a "never overcommit"
677 policy that attempts to prevent any overcommit of memory.
678 Note that user_reserve_kbytes affects this policy.
680 This feature can be very useful because there are a lot of
681 programs that malloc() huge amounts of memory "just-in-case"
682 and don't use much of it.
684 The default value is 0.
686 See Documentation/vm/overcommit-accounting and
687 mm/mmap.c::__vm_enough_memory() for more information.
689 ==============================================================
overcommit_ratio:

When overcommit_memory is set to 2, the committed address
694 space is not permitted to exceed swap plus this percentage
695 of physical RAM. See above.
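For example, to switch to 'never overcommit' mode with a commit limit of
swap plus 80% of RAM (the percentage is only illustrative):

	echo 2 > /proc/sys/vm/overcommit_memory
	echo 80 > /proc/sys/vm/overcommit_ratio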
697 ==============================================================
page-cluster

page-cluster controls the number of pages up to which consecutive pages
702 are read in from swap in a single attempt. This is the swap counterpart
703 to page cache readahead.
The pages are consecutive not in terms of virtual/physical addresses, but
consecutive in swap space - that means they were swapped out together.
707 It is a logarithmic value - setting it to zero means "1 page", setting
708 it to 1 means "2 pages", setting it to 2 means "4 pages", etc.
709 Zero disables swap readahead completely.
711 The default value is three (eight pages at a time). There may be some
712 small benefits in tuning this to a different value if your workload is
715 Lower values mean lower latencies for initial faults, but at the same time
716 extra faults and I/O delays for following faults if they would have been part of
717 that consecutive pages readahead would have brought in.
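For example, since the value is a base-2 logarithm of the page count:

	echo 0 > /proc/sys/vm/page-cluster   # 2^0 = 1 page per swap readahead
	echo 3 > /proc/sys/vm/page-cluster   # 2^3 = 8 pages (the default)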
719 =============================================================
panic_on_oom

This enables or disables the panic-on-out-of-memory feature.
If this is set to 0, the kernel will kill some rogue process via the
OOM killer. Usually, the OOM killer can kill a rogue process and the
system will survive.
729 If this is set to 1, the kernel panics when out-of-memory happens.
However, if a process limits its allocations to certain nodes by using
mempolicy/cpusets, and those nodes run out of memory, one process may be
killed by the OOM killer and no panic occurs, because other nodes' memory
may still be free and the system as a whole may not yet be in a fatal state.
If this is set to 2, the kernel always panics, even in the situations
mentioned above. Even if OOM happens under a memory cgroup, the whole
system panics.
740 The default value is 0.
Values 1 and 2 are intended for cluster failover; select whichever matches
your failover policy. panic_on_oom=2 combined with kdump gives you a very
powerful tool to investigate why OOM happens, since you can obtain a memory
snapshot.
746 =============================================================
748 percpu_pagelist_fraction
750 This is the fraction of pages at most (high mark pcp->high) in each zone that
751 are allocated for each per cpu page list. The min value for this is 8. It
752 means that we don't allow more than 1/8th of pages in each zone to be
753 allocated in any single per_cpu_pagelist. This entry only changes the value
754 of hot per cpu pagelists. User can specify a number like 100 to allocate
755 1/100th of each zone to each per cpu page list.
757 The batch value of each per cpu pagelist is also updated as a result. It is
758 set to pcp->high/4. The upper limit of batch is (PAGE_SHIFT * 8)
The initial value is zero. The kernel does not use this value at boot time to set
761 the high water marks for each per cpu page list. If the user writes '0' to this
762 sysctl, it will revert to this default behavior.
764 ==============================================================
stat_interval

The time interval between which vm statistics are updated. The default
is 1 second.
swappiness

This control is used to define how aggressively the kernel swaps
memory pages. Higher values increase aggressiveness, lower values
decrease the amount of swap. A value of 0 instructs the kernel not to
778 initiate swap until the amount of free and file-backed pages is less
779 than the high water mark in a zone.
781 The default value is 60.
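For example, to make the kernel much less inclined to swap (the value is
only illustrative):

	echo 10 > /proc/sys/vm/swappiness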
783 ==============================================================
user_reserve_kbytes
787 When overcommit_memory is set to 2, "never overcommit" mode, reserve
788 min(3% of current process size, user_reserve_kbytes) of free memory.
789 This is intended to prevent a user from starting a single memory hogging
790 process, such that they cannot recover (kill the hog).
792 user_reserve_kbytes defaults to min(3% of the current process size, 128MB).
794 If this is reduced to zero, then the user will be allowed to allocate
795 all free memory with a single process, minus admin_reserve_kbytes.
796 Any subsequent attempts to execute a command will result in
797 "fork: Cannot allocate memory".
799 Changing this takes effect whenever an application requests memory.
801 ==============================================================
vfs_cache_pressure

This percentage value controls the tendency of the kernel to reclaim
807 the memory which is used for caching of directory and inode objects.
809 At the default value of vfs_cache_pressure=100 the kernel will attempt to
810 reclaim dentries and inodes at a "fair" rate with respect to pagecache and
811 swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
812 to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
813 never reclaim dentries and inodes due to memory pressure and this can easily
814 lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
815 causes the kernel to prefer to reclaim dentries and inodes.
817 Increasing vfs_cache_pressure significantly beyond 100 may have negative
818 performance impact. Reclaim code needs to take various locks to find freeable
819 directory and inode objects. With vfs_cache_pressure=1000, it will look for
820 ten times more freeable objects than there are.
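For example, to bias reclaim toward dentry and inode caches (the value is
only illustrative):

	echo 200 > /proc/sys/vm/vfs_cache_pressure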
822 ==============================================================
zone_reclaim_mode:

Zone_reclaim_mode allows someone to set more or less aggressive approaches to
827 reclaim memory when a zone runs out of memory. If it is set to zero then no
zone reclaim occurs. Allocations will be satisfied from other zones / nodes
in the system.
This is a value ORed together of:

1	= Zone reclaim on
2	= Zone reclaim writes dirty pages out
4	= Zone reclaim swaps pages
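For example, to enable zone reclaim and additionally allow it to write out
dirty pages:

	echo 3 > /proc/sys/vm/zone_reclaim_mode   # 1 (on) | 2 (write dirty pages)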
837 zone_reclaim_mode is disabled by default. For file servers or workloads
838 that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.
842 zone_reclaim may be enabled if it's known that the workload is partitioned
843 such that each partition fits within a NUMA node and that accessing remote
844 memory would cause a measurable performance reduction. The page allocator
845 will then reclaim easily reusable pages (those page cache pages that are
846 currently not used) before allocating off node pages.
848 Allowing zone reclaim to write out pages stops processes that are
849 writing large amounts of data from dirtying pages on other nodes. Zone
850 reclaim will write out dirty pages if a zone fills up and so effectively
851 throttle the process. This may decrease the performance of a single process
852 since it cannot use all of system memory to buffer the outgoing writes
anymore, but it preserves the memory on other nodes so that the performance
854 of other processes running on other nodes will not be affected.
856 Allowing regular swap effectively restricts allocations to the local
node unless explicitly overridden by memory policies or cpuset
configurations.
860 ============ End of Document =================================