Problem
When running MongoDB in production, configuring the open file descriptor limit (nofile) is critical. Ubuntu's default limit (1024) is insufficient for production workloads and may result in connection failures or 'Too many open files' errors under high load.
The same problem applies wherever you run production workloads, whether on AWS EC2, Azure VMs, GCP, or DigitalOcean.
Why Open File Limits Matter
MongoDB uses file descriptors for:
- Client connections
- WiredTiger data files
- Journal files
- Log files
- Internal sockets
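These categories add up quickly under load. As a rough sanity check, the sketch below compares how many descriptors a process currently holds against its soft limit (`show_fd_usage` is a hypothetical helper name; it reads /proc, so it works for mongod or any other PID you pass):

```shell
# show_fd_usage: print open-descriptor count vs. soft limit for a PID.
# Defaults to the current shell; for mongod: show_fd_usage "$(pidof mongod)"
show_fd_usage() {
  pid="${1:-$$}"
  # Each entry under /proc/<pid>/fd is one open descriptor.
  open_now=$(ls "/proc/$pid/fd" | wc -l)
  # The fourth field of the "Max open files" row is the soft limit.
  soft_limit=$(awk '/Max open files/ {print $4}' "/proc/$pid/limits")
  echo "PID $pid is using $open_now of $soft_limit file descriptors"
}
```

If the usage figure is anywhere near the limit, connection storms will push mongod into "Too many open files" territory.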
Recommended Production Value
Set the open file limit to 1048576.
This is typically the maximum allowed by the Ubuntu kernel and is effectively unlimited for most real-world workloads.
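You can confirm the kernel-side ceilings on your own host before settling on a value:

```shell
# fs.nr_open  = maximum nofile any single process may hold
# fs.file-max = system-wide descriptor total across all processes
cat /proc/sys/fs/nr_open
cat /proc/sys/fs/file-max
```

Any LimitNOFILE value above fs.nr_open will simply be capped, so there is no benefit in asking for more.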
Configuration Steps (systemd)
Create or edit the systemd override file:
sudo systemctl edit mongod
(or edit /etc/systemd/system/mongod.service.d/override.conf directly with vi)
Add the following configuration:
[Service]
LimitNOFILE=1048576
Reload and restart MongoDB:
sudo systemctl daemon-reexec
sudo systemctl daemon-reload
sudo systemctl restart mongod
Also update the PAM limits configuration:
vi /etc/security/limits.conf
mongodb soft nofile unlimited
mongodb hard nofile unlimited
root soft nofile unlimited
root hard nofile unlimited
Note that limits.conf applies to login sessions; for the mongod service itself, the systemd LimitNOFILE setting above is what takes effect.
Verify the active limit on the running process:
cat /proc/$(pidof mongod)/limits | grep "open files"
Expected Output
Linux does not support true unlimited file descriptors. Even when setting LimitNOFILE=infinity, the value is capped by the kernel parameter fs.nr_open. On Ubuntu, this is commonly 1048576.
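A quick way to see this cap in effect is to compare the shell's hard nofile limit with the kernel ceiling; the hard limit is always a finite number, never "unlimited":

```shell
# The hard nofile limit any process ends up with is bounded by fs.nr_open,
# regardless of what was requested via LimitNOFILE or limits.conf.
hard=$(ulimit -Hn)
cap=$(cat /proc/sys/fs/nr_open)
echo "hard nofile limit: $hard (kernel cap fs.nr_open: $cap)"
```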
Final Recommendation
For MongoDB production environments:
- Set LimitNOFILE to 640000 or 1048576
- Ensure fs.nr_open and fs.file-max are properly configured
- Always verify after restart
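If fs.nr_open or fs.file-max do need raising, a persistent sysctl drop-in is one option. A minimal sketch; the file name 90-mongodb-fd.conf is arbitrary and the values are examples to tune per workload:

```
# /etc/sysctl.d/90-mongodb-fd.conf
fs.nr_open = 1048576
fs.file-max = 2097152
```

Apply with sudo sysctl --system, then re-check the values under /proc/sys/fs before restarting mongod.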