How to solve the “Too Many Open Files” error on Linux?

On Linux computers, system resources are shared among users. Try to use more than your fair share and you’ll reach an upper limit. You may also interfere with other users or processes.

Shared system resources

Among its countless other tasks, the kernel of a Linux computer is always busy monitoring who is using how much of the system’s finite resources, such as RAM and CPU cycles. A multi-user system requires constant attention to ensure that people and processes do not use more of any system resource than they are entitled to.

It’s not fair, for example, for one person to take up so much CPU time that the computer seems slow to everyone else. Even if you are the only person using your Linux computer, there are limits to the resources your processes can use. After all, you are just one user among many.

Some system resources are well known and obvious, such as RAM, CPU cycles, and hard drive space. But there are many, many other resources that are monitored for which each user – or each user-owned process – has a set upper limit. One of them is the number of files a process can have open at the same time.

If you’ve ever seen the “Too many open files” error message in a terminal window or found it in your system logs, it means the upper limit has been reached and the process is not allowed to open any more files.
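
If you want to see the error for yourself, you can provoke it safely in a throwaway shell by lowering the soft limit so far that not even one extra file can be opened. This is just a sketch, and the exact wording of the error varies from program to program:

bash -c 'ulimit -n 3; exec 3</dev/null'

With only three file handles allowed, all of them taken by the standard input, output, and error streams, the attempt to open /dev/null fails with a “Too many open files” error.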


It’s not just the files you’ve opened

There is a system-level limit on the number of open files Linux can handle. It is a very large number, as we will see, but there is still a limit. Each user process receives a small share of that system-wide total.

What is actually allocated is a number of file handles. Each file that is opened requires a handle. Even with fairly generous system-wide allocations, file handles can be used up faster than you might imagine.

Linux abstracts almost everything so that it appears as a file. Sometimes these are just regular files. But other actions, such as opening a directory, also use a file handle. Linux uses block special files as a sort of driver for hardware devices. Character special files are very similar, but are more often used with devices that have a concept of throughput, such as pipes and serial ports.

Block special files process a block of data at a time and character special files process each character separately. Both kinds of special file can only be accessed through file handles. Libraries used by a program use a file handle, streams use file handles, and network connections use file handles.
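
You can see these special files for yourself in the /dev directory. Assuming your system has a first SATA drive and a serial port (the device names here are just common examples and may differ on your machine), the leading “b” and “c” in the listing mark block and character special files respectively:

ls -l /dev/sda /dev/ttyS0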

Abstracting all these different requirements so that they appear as files simplifies interfacing with them and allows things like piping and streams to work.

You can see that, behind the scenes, Linux opens files and uses file handles just to run itself, never mind your user processes. The number of open files is not just the number of files you have open. Almost everything in the operating system uses file handles.
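
For a rough sense of the scale, you can count every open file entry that lsof can see across the whole system. The figure is only approximate, since lsof prints one line per file handle per process, but it makes the point that the operating system alone accounts for thousands of them:

sudo lsof | wc -l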

File handle limits

The maximum system-wide number of file handles can be viewed with this command.

cat /proc/sys/fs/file-max

Find system maximum for open files

This command returns a preposterously large figure: 9.2 quintillion. That is the theoretical system maximum, and the largest value that fits in a 64-bit signed integer. Whether your poor computer could actually cope with that many files open at once is another matter entirely.
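
A more useful companion figure lives in the same directory. The “/proc/sys/fs/file-nr” file reports three values: the number of file handles currently allocated system-wide, the number allocated but unused, and the maximum again:

cat /proc/sys/fs/file-nr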

At the user level, there is no explicit value for the maximum number of open files you can have, but we can work one out roughly. To find the maximum number of files that one of your processes can open, we can use the ulimit command with the -n (open files) option.

ulimit -n

Find the number of files a process can open

And to find the maximum number of processes a user can have, we’ll use ulimit with the -u (user processes) option.

ulimit -u

Find the number of processes a user can have

On our test machine those commands returned 1024 and 7640, and multiplying them gives 7,823,360. Of course, many of those processes are already being used by your desktop environment and other background processes. So this is another theoretical maximum that you will never realistically reach.
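
If you want to check the arithmetic, the shell will happily do it for you:

echo $((1024 * 7640))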

The important number is the number of files a process can open. By default, this is 1024. Note that opening the same file 1024 times concurrently is the same as opening 1024 different files concurrently. Once you’ve used all of your file handles, you’re done.


It is possible to adjust the number of files a process can open. There are actually two values to consider. One is the value it is currently set to, or that you are trying to set it to. This is called the soft limit. There is also a hard limit, which is the highest value the soft limit can be raised to.

The way to think of it is: the soft limit is really the “current value,” and the hard limit is the ceiling the current value can be raised to. A normal, non-root user can raise their soft limit to any value up to their hard limit. The root user can also raise the hard limit.
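
You can watch this enforcement in action. As a normal user, asking for a soft limit above your hard limit (here, asking for no limit at all) is refused with an “Operation not permitted” style error, although the exact wording varies by shell:

ulimit -n unlimited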

To see the current soft and hard limits, use ulimit with the -S (soft) and -H (hard) options, and the -n (open files) option.

ulimit -Sn

ulimit -Hn

Find the soft and hard limits for process file handles

To create a situation where we can see the soft limit being applied, we wrote a program that repeatedly opens files until it fails. It then waits for a keystroke before releasing all of the file handles it used. The program is called open-files.
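
We haven’t reproduced the source of open-files here, but its behavior is easy to sketch in plain Bash. This assumed equivalent keeps opening /dev/null (remember, reopening the same file still consumes a fresh handle every time) until the kernel refuses, then holds on to the handles until a key is pressed:

#!/bin/bash
# open-files.sh: a rough stand-in for the open-files program.
# Keep opening /dev/null until "Too many open files" stops us.
count=0
while exec {fd}</dev/null; do
    count=$((count + 1))
done 2>/dev/null

echo "Opened $count file descriptors before hitting the limit."
# Hold the descriptors open until a key is pressed.
read -n 1 -s -p "Press any key to release them..."
echo

Note that Bash allocates {fd} descriptors starting at number 10, so the count this script reports will be slightly lower than the 1021 the compiled program manages.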

./open-files

The open-files program reaches the soft limit of 1024 files.

It opens 1021 files and fails when it tries to open file 1022.

1024 minus 1021 is 3. Where did the other three file handles go? They were used for the STDIN, STDOUT, and STDERR streams. These are created automatically for every process. They always have file descriptor values of 0, 1, and 2.
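
You can inspect your own shell’s descriptors through the /proc filesystem. The listing shows descriptors 0, 1, and 2 as symbolic links to your terminal device (Bash also keeps an extra descriptor, numbered 255, for its own housekeeping):

ls -l /proc/$$/fd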

We can see them by using the lsof command with the -p (process) option and the process ID of the open-files program. Conveniently, it prints its own process ID in the terminal window.

lsof -p 11038

The stdin, stdout, and stderr streams and file handles in the lsof command output.


Of course, in a real-world situation, you might not know which process has just gobbled up all the file handles. To start your investigation, you can use this pipeline of commands. It will tell you the fifteen most prolific users of file handles on your computer.

lsof | awk '{ print $1 " " $2; }' | sort -rn | uniq -c | sort -rn | head -15

Showing the processes that use the most file handles

To see more or fewer entries, adjust the -15 parameter of the head command. Once you have identified the process, you need to determine whether it has gone rogue and is opening too many files because it is out of control, or whether it genuinely needs those files. If it needs them, you should increase its file handle limit.
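
Once you have a candidate process ID, the /proc filesystem also gives you a quick count of how many file handles that process currently holds. Using the process ID from our earlier example (substitute your own, and add sudo for processes you don’t own):

ls /proc/11038/fd | wc -l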

Increasing the soft limit

If we increase the soft limit and run our program again, we should see that it opens more files. We will use the ulimit command and the -n (open files) option with a numeric value of 2048. This will be the new soft limit.

ulimit -n 2048

Setting a new soft limit for processes

This time we managed to open 2045 files. As expected, that is three fewer than 2048, because of the file handles used for STDIN, STDOUT, and STDERR.

Permanent changes

Increasing the soft limit only affects the current shell. Open a new terminal window and check the soft limit: you will see that it still has the old default value. But there is a way to globally set a new default for the maximum number of open files a process can have, one that is persistent and survives reboots.

Outdated advice often recommends that you modify files such as “/etc/sysctl.conf” and “/etc/security/limits.conf”. However, on systemd-based distributions, these changes do not work consistently, especially for graphical login sessions.

The technique shown here is the way to do it on systemd-based distributions. There are two files we need to work with. The first is the “/etc/systemd/system.conf” file, and we need to use sudo to edit it.

sudo gedit /etc/systemd/system.conf

Editing the system.conf file

Find the line that contains the string “DefaultLimitNOFILE”. Remove the hash “#” from the start of the line, and change the first number to whatever you want the new soft limit for processes to be. We chose 4096. The second number on the line is the hard limit. We did not change it.
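
After our edit, the line looked something like the example below. The 524288 hard limit is the value commonly shipped in systemd’s default configuration and is shown here as an assumption; we left whatever value was already there untouched, so yours may differ.

DefaultLimitNOFILE=4096:524288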


The DefaultLimitNOFILE value in the system.conf file

Save the file and close the editor.

We must repeat this operation on the “/etc/systemd/user.conf” file.

sudo gedit /etc/systemd/user.conf

Editing the user.conf file

Make the same adjustments on the line containing the string “DefaultLimitNOFILE”.

The DefaultLimitNOFILE value in the user.conf file

Save the file and close the editor. You must either restart your computer or use the systemctl command with the daemon-reexec option so that systemd re-executes itself and picks up the new settings.

sudo systemctl daemon-reexec

Restarting systemd

Opening a terminal window and checking the new limit should show the new value you set. In our case, it was 4096.

ulimit -n

Checking the new soft limit with ulimit -n

We can verify that the new value is in effect by rerunning our open-files program.

./open-files

Checking the new soft limit with the open-files program

The program fails when it tries to open file number 4094, meaning 4093 files were opened. That is our expected value, 3 fewer than 4096.

Everything is a file

This is why Linux is so dependent on file handles. Now, if you ever start to run out of them, you will know how to increase your quota.
