Friday, August 2, 2013

Linux: "File size limit exceeded" or "Too many open files in system"

While running my benchmark, I ran into the following exception:
  • java.io.FileNotFoundException
which was caused by:
  • Too many open files in system
as found in the MyServer_1-diagnostic.log:

[2013-06-27T15:40:03.611-07:00] [CRMCommonServer_1] [ERROR] [] [oracle.security.audit.ajl.loader.AuditLoaderManager] [tid: AuditLoaderRunner] [ecid: 0000Jy7mm0L7y0I_IpG7yf1Hn9FC0001na,0] IAU:IAU-5046: Stopping AuditLoader, caught exception: oracle.security.audit.AuditException: java.io.FileNotFoundException: /slot/.../MyDomain/servers/myserver_1/logs/iau/state/auditloader.state (Too many open files in system)[[
        at oracle.security.audit.service.AuditLoaderManager.readMessages(AuditLoaderManager.java:276)
        at oracle.security.audit.service.AuditLoaderManager$Runner.run(AuditLoaderManager.java:335)
Caused by: java.io.FileNotFoundException: /slot/.../MyDomain/servers/MyServer_1/logs/iau/state/auditloader.state (Too many open files in system)
        at java.io.FileOutputStream.open(Native Method)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:145)
        at java.io.FileWriter.<init>(FileWriter.java:73)
        at oracle.security.audit.ajl.loader.AuditLoader.saveState(AuditLoader.java:213)
        at oracle.security.audit.service.AuditLoaderManager.readMessages(AuditLoaderManager.java:262)
        ... 1 more
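
Before changing any limits, it helps to confirm which process is actually exhausting file descriptors. A minimal check, assuming the WebLogic server JVM runs as process id 12345 (a placeholder; substitute the real PID from ps or jps):

$ ls /proc/12345/fd | wc -l     # file descriptors currently held by that process
$ lsof -p 12345 | wc -l         # similar view via lsof (also lists memory maps, cwd, etc.)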

User-Level File Descriptor Limits


To view the open file limit for the current Linux user, run:

$ ulimit -n
8192
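
A detail worth knowing before changing anything: the shell tracks a soft limit (the value actually enforced, which plain ulimit -n reports) and a hard limit (the ceiling up to which a non-root user may raise the soft limit). A quick way to see both; the numbers are illustrative:

$ ulimit -Sn     # soft limit in effect for this shell
8192
$ ulimit -Hn     # hard limit; only root can raise this
16384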

To set a new value for the current shell session (the change takes effect immediately), run:

$ ulimit -n 16384


Alternatively, if you want the change to survive a reboot, do the following:
  1. Exit all shell sessions for the user whose limits you want to change.
  2. As root, edit the file /etc/security/limits.conf and add these two lines toward the end:
       user1 soft nofile 16384
       user1 hard nofile 16384
     The two lines raise the maximum number of open file handles (nofile) for user1 to the new value.
  3. Save the file.
  4. Log in as user1 again. The new limits will be in effect (a quick spot check follows this list).
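
Assuming pam_limits is part of su's PAM session stack (it is on a stock OEL/RHEL 5 install), root can also spot-check the new values without logging out, by running the commands in a login shell for user1 (the example account from the steps above):

# su - user1 -c 'ulimit -Sn; ulimit -Hn'
16384
16384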

System-Wide File Descriptor Limits


On Linux, there is also a system-wide kernel parameter:
  • fs.file-max
Use the following command to display the maximum number of open file descriptors allowed on the system:

$ cat /proc/sys/fs/file-max
100000
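
To see how close the system is to that ceiling, read /proc/sys/fs/file-nr: it reports the number of allocated file handles, the number allocated but unused, and the maximum (the last field is the same value as file-max). The figures below are only illustrative:

$ cat /proc/sys/fs/file-nr
5024    0       100000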

Many applications, such as Oracle Database or WebLogic Server, need this setting to be considerably higher. You can increase the maximum number of open files by setting a new value for the kernel parameter fs.file-max (exposed as /proc/sys/fs/file-max), logged in as root:

# sysctl -w fs.file-max=262144

The above command raises the limit to 262144 files. To make the setting persist across reboots, edit the /etc/sysctl.conf file:

# vi /etc/sysctl.conf

Append a configuration directive as follows:

fs.file-max = 262144

Save and close the file. The new value is applied automatically at the next boot; to apply it immediately without rebooting, run:

# sysctl -p

Verify the setting with either of the following commands:

# cat /proc/sys/fs/file-max

OR

# sysctl fs.file-max
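
To confirm that the value will also come back after a reboot, check that the directive really made it into /etc/sysctl.conf (a simple grep is enough):

# grep fs.file-max /etc/sysctl.conf
fs.file-max = 262144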

Final Words


The commands used in this article apply to the following Linux release:

$ cat /etc/*-release
Enterprise Linux Enterprise Linux Server release 5.8 (Carthage)
Oracle Linux Server release 5.8
Red Hat Enterprise Linux Server release 5.8 (Tikanga)

