Documentation for /proc/sys/fs/*	kernel version 2.2.10
	(c) 1998, 1999,  Rik van Riel <riel@nl.linux.org>
	(c) 2009,        Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in README.
==============================================================

This file contains documentation for the sysctl files in
/proc/sys/fs/ and is valid for Linux kernel version 2.2.

The files in this directory can be used to tune and monitor
miscellaneous and general things in the operation of the Linux
kernel. Since some of the files _can_ be used to screw up your
system, it is advisable to read both documentation and source
before actually making adjustments.
1. /proc/sys/fs
----------------------------------------------------------

Currently, these files are in /proc/sys/fs:
- aio-max-nr
- aio-nr
- dentry-state
- dquot-max
- dquot-nr
- file-max
- file-nr
- inode-max
- inode-nr
- inode-state
- mount-max
- nr_open
- overflowgid
- overflowuid
- pipe-user-pages-hard
- pipe-user-pages-soft
- protected_hardlinks
- protected_symlinks
- suid_dumpable
- super-max
- super-nr
==============================================================

aio-nr & aio-max-nr:

aio-nr is the running total of the number of events specified on the
io_setup system call for all currently active aio contexts. If aio-nr
reaches aio-max-nr then io_setup will fail with EAGAIN. Note that
raising aio-max-nr does not result in the pre-allocation or re-sizing
of any kernel data structures.
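
As a hedged illustration (not from the original text), the sketch below
issues the raw io_setup()/io_destroy() syscalls and treats EAGAIN as a
sign that aio-max-nr has been reached; the request for 128 events is an
arbitrary example value.

/* Minimal sketch: observe io_setup() failing with EAGAIN once the
 * system-wide aio-nr total would exceed aio-max-nr. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/aio_abi.h>

int main(void)
{
        aio_context_t ctx = 0;

        /* Ask for 128 in-flight events; this adds 128 to aio-nr. */
        if (syscall(__NR_io_setup, 128, &ctx) < 0) {
                if (errno == EAGAIN)
                        fprintf(stderr, "aio-max-nr reached: %s\n", strerror(errno));
                else
                        perror("io_setup");
                return 1;
        }

        syscall(__NR_io_destroy, ctx);  /* releases the 128 events again */
        return 0;
}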
==============================================================

dentry-state:

From linux/fs/dentry.c:
--------------------------------------------------------------
struct {
        int nr_dentry;
        int nr_unused;
        int age_limit;         /* age in seconds */
        int want_pages;        /* pages requested by system */
        int dummy[2];
} dentry_stat = {0, 0, 45, 0,};
--------------------------------------------------------------

Dentries are dynamically allocated and deallocated, and
nr_dentry seems to be 0 all the time. Hence it's safe to
assume that only nr_unused, age_limit and want_pages are
used. Nr_unused seems to be exactly what its name says.
Age_limit is the age in seconds after which dcache entries
can be reclaimed when memory is short and want_pages is
nonzero when shrink_dcache_pages() has been called and the
dcache isn't pruned yet.
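
For reference, a minimal user-space sketch that reads the six numbers
exported through /proc/sys/fs/dentry-state; it assumes the layout shown
above, with the last two values being the dummy entries.

/* Minimal sketch: read and print the dentry-state fields. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/fs/dentry-state", "r");
        int nr_dentry, nr_unused, age_limit, want_pages, dummy1, dummy2;

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fscanf(f, "%d %d %d %d %d %d", &nr_dentry, &nr_unused,
                   &age_limit, &want_pages, &dummy1, &dummy2) != 6) {
                fprintf(stderr, "unexpected format\n");
                fclose(f);
                return 1;
        }
        printf("nr_dentry=%d nr_unused=%d age_limit=%d want_pages=%d\n",
               nr_dentry, nr_unused, age_limit, want_pages);
        fclose(f);
        return 0;
}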
==============================================================

dquot-max & dquot-nr:

The file dquot-max shows the maximum number of cached disk
quota entries.

The file dquot-nr shows the number of allocated disk quota
entries and the number of free disk quota entries.

If the number of free cached disk quotas is very low and
you have some awesome number of simultaneous system users,
you might want to raise the limit.
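
Raising the limit amounts to writing a larger number into dquot-max. A
minimal sketch, assuming the file is writable on the running kernel and
the program is run as root; the value 16384 is just an example.

/* Minimal sketch: raise dquot-max by writing a new value to the sysctl
 * file.  Requires root; 16384 is only an example value. */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/fs/dquot-max", "w");

        if (!f) {
                perror("fopen /proc/sys/fs/dquot-max");
                return 1;
        }
        fprintf(f, "%d\n", 16384);
        fclose(f);
        return 0;
}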
==============================================================

file-max & file-nr:

The value in file-max denotes the maximum number of file-
handles that the Linux kernel will allocate. When you get lots
of error messages about running out of file handles, you might
want to increase this limit.

Historically, the kernel was able to allocate file handles
dynamically, but not to free them again. The three values in
file-nr denote the number of allocated file handles, the number
of allocated but unused file handles, and the maximum number of
file handles. Linux 2.6 always reports 0 as the number of free
file handles -- this is not an error, it just means that the
number of allocated file handles exactly matches the number of
used file handles.

Attempts to allocate more file descriptors than file-max are
reported with printk, look for "VFS: file-max limit <number>
reached".
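
A small sketch that reads the three file-nr values described above and
reports how many file handles are actually in use:

/* Minimal sketch: parse file-nr (allocated, free, maximum). */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/fs/file-nr", "r");
        unsigned long allocated, unused, max;

        if (!f) {
                perror("fopen");
                return 1;
        }
        if (fscanf(f, "%lu %lu %lu", &allocated, &unused, &max) != 3) {
                fprintf(stderr, "unexpected format\n");
                fclose(f);
                return 1;
        }
        fclose(f);
        printf("in use: %lu of %lu (free: %lu)\n", allocated - unused, max, unused);
        return 0;
}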
==============================================================

nr_open:

This denotes the maximum number of file-handles a process can
allocate. Default value is 1024*1024 (1048576) which should be
enough for most machines. Actual limit depends on the RLIMIT_NOFILE
resource limit.
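
The effective per-process ceiling is therefore bounded by the
RLIMIT_NOFILE resource limit; a minimal sketch that prints the limits
the current process runs with:

/* Minimal sketch: print the RLIMIT_NOFILE soft and hard limits, which
 * cap how many file handles this process may open (and which cannot be
 * raised above nr_open). */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) < 0) {
                perror("getrlimit");
                return 1;
        }
        printf("soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
        return 0;
}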
==============================================================

inode-max, inode-nr & inode-state:

As with file handles, the kernel allocates the inode structures
dynamically, but can't free them yet.

The value in inode-max denotes the maximum number of inode
handlers. This value should be 3-4 times larger than the value
in file-max, since stdin, stdout and network sockets also
need an inode struct to handle them. When you regularly run
out of inodes, you need to increase this value.

The file inode-nr contains the first two items from
inode-state, so we'll skip to that file...

Inode-state contains three actual numbers and four dummies.
The actual numbers are, in order of appearance, nr_inodes,
nr_free_inodes and preshrink.

Nr_inodes stands for the number of inodes the system has
allocated; this can be slightly more than inode-max because
Linux allocates them one pageful at a time.

Nr_free_inodes represents the number of free inodes (?) and
preshrink is nonzero when the nr_inodes > inode-max and the
system needs to prune the inode list instead of allocating
more.
==============================================================

overflowgid & overflowuid:

Some filesystems only support 16-bit UIDs and GIDs, although in Linux
UIDs and GIDs are 32 bits. When one of these filesystems is mounted
with writes enabled, any UID or GID that would exceed 65535 is translated
to a fixed value before being written to disk.

These sysctls allow you to change the value of the fixed UID and GID.
The default is 65534.
==============================================================

pipe-user-pages-hard:

Maximum total number of pages a non-privileged user may allocate for pipes.
Once this limit is reached, no new pipes may be allocated until usage goes
below the limit again. When set to 0, no limit is applied, which is the
default setting.
==============================================================

pipe-user-pages-soft:

Maximum total number of pages a non-privileged user may allocate for pipes
before the pipe size gets limited to a single page. Once this limit is reached,
new pipes will be limited to a single page in size for this user in order to
limit total memory usage, and trying to increase them using fcntl() will be
denied until usage goes below the limit again. The default value allows
allocating up to 1024 pipes at their default size. When set to 0, no limit is
applied, which is the default setting.
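
The fcntl() call referred to above is F_SETPIPE_SZ (available since
Linux 2.6.35). A minimal sketch; the 64 KiB request is an arbitrary
example, and the errno noted in the comment is what the kernel
typically returns when the per-user limit blocks the resize.

/* Minimal sketch: create a pipe and try to grow it with F_SETPIPE_SZ.
 * When the calling user is over pipe-user-pages-soft (and lacks
 * CAP_SYS_RESOURCE), the resize is refused, typically with EPERM. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fds[2];

        if (pipe(fds) < 0) {
                perror("pipe");
                return 1;
        }
        if (fcntl(fds[0], F_SETPIPE_SZ, 64 * 1024) < 0)
                perror("F_SETPIPE_SZ");  /* over the per-user soft limit? */
        else
                printf("pipe size now %d bytes\n", fcntl(fds[0], F_GETPIPE_SZ));
        close(fds[0]);
        close(fds[1]);
        return 0;
}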
==============================================================

protected_hardlinks:

A long-standing class of security issues is the hardlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given hardlink (i.e. a
root process follows a hardlink created by another user). Additionally,
on systems without separated partitions, this stops unauthorized users
from "pinning" vulnerable setuid/setgid files against being upgraded by
the administrator, or linking to special files.

When set to "0", hardlink creation behavior is unrestricted.

When set to "1", hardlinks cannot be created by users if they do not
already own the source file, or do not have read/write access to it.

This protection is based on the restrictions in Openwall and grsecurity.
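
A hedged illustration of the "1" setting: link(2) on a source file the
caller neither owns nor has read/write access to is expected to be
refused (typically with EPERM). Both paths below are only examples.

/* Minimal sketch: attempt to hardlink a file owned by someone else.
 * With protected_hardlinks set to 1, and no ownership of or read/write
 * access to the source, the call is expected to fail. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        if (link("/etc/shadow", "/tmp/shadow-link") < 0) {
                perror("link");  /* typically EPERM with the restriction on */
                return 1;
        }
        puts("hardlink created (restriction off, or caller has access)");
        return 0;
}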
==============================================================

protected_symlinks:

A long-standing class of security issues is the symlink-based
time-of-check-time-of-use race, most commonly seen in world-writable
directories like /tmp. The common method of exploitation of this flaw
is to cross privilege boundaries when following a given symlink (i.e. a
root process follows a symlink belonging to another user). For a likely
incomplete list of hundreds of examples across the years, please see:
http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=/tmp

When set to "0", symlink following behavior is unrestricted.

When set to "1", symlinks are permitted to be followed only when outside
a sticky world-writable directory, or when the uid of the symlink and
follower match, or when the directory owner matches the symlink's owner.

This protection is based on the restrictions in Openwall and grsecurity.
==============================================================

suid_dumpable:

This value can be used to query and set the core dump mode for setuid
or otherwise protected/tainted binaries. The modes are:

0 - (default) - traditional behaviour. Any process which has changed
    privilege levels or is execute only will not be dumped.
1 - (debug) - all processes dump core when possible. The core dump is
    owned by the current user and no security is applied. This is
    intended for system debugging situations only. Ptrace is unchecked.
    This is insecure as it allows regular users to examine the memory
    contents of privileged processes.
2 - (suidsafe) - any binary which normally would not be dumped is dumped
    anyway, but only if the "core_pattern" kernel sysctl is set to
    either a pipe handler or a fully qualified path. (For more details
    on this limitation, see CVE-2006-2451.) This mode is appropriate
    when administrators are attempting to debug problems in a normal
    environment, and either have a core dump pipe handler that knows
    to treat privileged core dumps with care, or a specific directory
    defined for catching core dumps. If a core dump happens without
    a pipe handler or fully qualified path, a message will be emitted
    to syslog warning about the lack of a correct setting.
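
As a rough companion to mode 2 above, the sketch below reads
suid_dumpable together with the core_pattern sysctl and flags the
combination the kernel would warn about (mode 2 without a pipe handler
or fully qualified path); the buffer size is arbitrary.

/* Minimal sketch: warn when mode 2 (suidsafe) is combined with a
 * core_pattern that is neither a pipe handler ('|' prefix) nor a
 * fully qualified path. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f;
        int mode = -1;
        char pattern[256] = "";

        f = fopen("/proc/sys/fs/suid_dumpable", "r");
        if (f) {
                if (fscanf(f, "%d", &mode) != 1)
                        mode = -1;
                fclose(f);
        }

        f = fopen("/proc/sys/kernel/core_pattern", "r");
        if (f) {
                if (!fgets(pattern, sizeof(pattern), f))
                        pattern[0] = '\0';
                fclose(f);
        }
        pattern[strcspn(pattern, "\n")] = '\0';

        if (mode == 2 && pattern[0] != '|' && pattern[0] != '/')
                fprintf(stderr, "suidsafe mode set, but core_pattern is neither "
                        "a pipe handler nor an absolute path: \"%s\"\n", pattern);
        else
                printf("suid_dumpable=%d core_pattern=\"%s\"\n", mode, pattern);
        return 0;
}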
==============================================================

super-max & super-nr:

These numbers control the maximum number of superblocks, and
thus the maximum number of mounted filesystems the kernel
can have. You only need to increase super-max if you need to
mount more filesystems than the current value in super-max
allows you to.
==============================================================

aio-nr & aio-max-nr:

aio-nr shows the current system-wide number of asynchronous io
requests. aio-max-nr allows you to change the maximum value
aio-nr can grow to.
==============================================================

mount-max:

This denotes the maximum number of mounts that may exist
in a mount namespace.
==============================================================


2. /proc/sys/fs/binfmt_misc
----------------------------------------------------------

Documentation for the files in /proc/sys/fs/binfmt_misc is
in Documentation/binfmt_misc.txt.

3. /proc/sys/fs/mqueue - POSIX message queues filesystem
----------------------------------------------------------

The "mqueue" filesystem provides the necessary kernel features to enable the
creation of a user space library that implements the POSIX message queues
API (as noted by the MSG tag in the POSIX 1003.1-2001 version of the System
Interfaces specification.)

The "mqueue" filesystem contains values for determining/setting the amount of
resources used by the file system.

/proc/sys/fs/mqueue/queues_max is a read/write file for setting/getting the
maximum number of message queues allowed on the system.
/proc/sys/fs/mqueue/msg_max is a read/write file for setting/getting the
maximum number of messages in a queue value. In fact it is the limiting value
for another (user) limit which is set in the mq_open invocation. This attribute
of a queue must be less than or equal to msg_max.

/proc/sys/fs/mqueue/msgsize_max is a read/write file for setting/getting the
maximum message size value (it is an attribute of every message queue, set
during its creation).

/proc/sys/fs/mqueue/msg_default is a read/write file for setting/getting the
default number of messages in a queue value if the attr parameter of mq_open(2)
is NULL. If it exceeds msg_max, the default value is initialized to msg_max.

/proc/sys/fs/mqueue/msgsize_default is a read/write file for setting/getting
the default message size value if the attr parameter of mq_open(2) is NULL. If
it exceeds msgsize_max, the default value is initialized to msgsize_max.
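
To show how these limits surface in the API, here is a minimal sketch
using mq_open(3) with explicit attributes; the queue name and the two
numbers are made-up example values, and requests above msg_max or
msgsize_max from an unprivileged caller are expected to fail with
EINVAL (link with -lrt on older glibc).

/* Minimal sketch: create a POSIX message queue with explicit
 * attributes, bounded by the mqueue sysctls described above. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
        struct mq_attr attr = {
                .mq_maxmsg  = 8,      /* messages in the queue, <= msg_max    */
                .mq_msgsize = 1024,   /* bytes per message, <= msgsize_max    */
        };
        mqd_t q = mq_open("/demo-queue", O_CREAT | O_RDWR, 0600, &attr);

        if (q == (mqd_t)-1) {
                perror("mq_open");  /* EINVAL if the attributes exceed the limits */
                return 1;
        }
        mq_close(q);
        mq_unlink("/demo-queue");
        return 0;
}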

4. /proc/sys/fs/epoll - Configuration options for the epoll interface
--------------------------------------------------------

This directory contains configuration options for the epoll(7) interface.

max_user_watches
----------------

Every epoll file descriptor can store a number of files to be monitored
for event readiness. Each one of these monitored files constitutes a "watch".
This configuration option sets the maximum number of "watches" that are
allowed for each user.
Each "watch" costs roughly 90 bytes on a 32bit kernel, and roughly 160 bytes
on a 64bit one.
The current default value for max_user_watches is 1/32 of the available
low memory, divided by the "watch" cost in bytes.
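
Each successful EPOLL_CTL_ADD consumes one such "watch", and exceeding
max_user_watches makes epoll_ctl(2) fail with ENOSPC. A minimal sketch
that registers a single watch on stdin:

/* Minimal sketch: register one "watch" with an epoll instance. */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = 0 };

        if (epfd < 0) {
                perror("epoll_create1");
                return 1;
        }
        if (epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev) < 0)
                perror("epoll_ctl");  /* ENOSPC once max_user_watches is hit */
        close(epfd);
        return 0;
}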