Categories: AWS, MongoDB, NoSQL

MongoDB – Top 10 Best Practices for AWS Production Deployments

The following are some of the best practices to consider for your MongoDB production deployments on AWS. Configuration sketches for several of these settings follow the list.

  • File system: MongoDB recommends using either the XFS or EXT4 file system for better performance. With the WiredTiger storage engine, XFS is strongly recommended. Refer to the MongoDB production notes for finer details, and see the XFS sketch after this list.
  • AWS EBS (Elastic Block Store) Configuration
    • EBS-optimized instances: It is advisable to use EBS-optimized instances to host the MongoDB database. EBS-optimized instances provide dedicated throughput between EC2 and EBS, keeping storage traffic separate from other (application) network traffic. If a replica set is configured with ephemeral storage, at least one of the secondaries should use EBS volumes as an assurance of data persistence.
    • Separate EBS volumes should be used for storing data, journal, and log files. Using a separate storage device for each avoids I/O contention and increases the overall throughput of the disk subsystem; see the volume-layout sketch after this list.
    • Provisioned IOPS (PIOPS): Use Provisioned IOPS volumes to achieve consistent EBS performance (see the AWS CLI sketch after this list).
    • EBS volumes should be provisioned to match the write load of the primary; otherwise, secondaries may fall behind in replication.
  • Read-ahead limit: Check the disk read-ahead settings on AWS EC2, as the defaults may not be optimized for MongoDB. Set read-ahead to 0 regardless of storage media type (spinning disk, SSD, etc.). A higher read-ahead value benefits sequential I/O operations, but since MongoDB disk access patterns are generally random, it provides limited benefit and can even degrade performance. As such, for most workloads a read-ahead of 0 provides optimal MongoDB performance. For further details, read the MongoDB 3.4 production notes. That said, a higher read-ahead value such as 32 sectors (16 KB) can also be tested to validate whether it yields a measurable, repeatable, and reliable benefit. A blockdev sketch follows this list.
  • ulimit: ulimit is one of the mechanisms used by Unix-like operating systems such as Linux to prevent a single user from consuming too many system resources, such as files, threads, and network connections. By default, the ulimit values for nofile (number of open files) and nproc (number of processes/threads) are low. Low ulimit values cause issues in the course of normal MongoDB operation, because mongod and mongos use threads and file descriptors to track connections and manage internal operations. There are different ways to set ulimit values; related details can be found on the UNIX ulimit settings page. The recommended value for both nofile and nproc is 64000 (soft limit) / 64000 (hard limit). A limits.conf sketch follows this list.
  • TCP keepalive: At times, socket-related errors between members of a replica set or sharded cluster can be attributed to a non-optimal TCP keepalive value. A common default is 7200 seconds (2 hours); MongoDB recommends setting it to 120 seconds (2 minutes). On Linux, values greater than 300 seconds (5 minutes) are overridden on mongod and mongos sockets and capped at 300 seconds. Related details can be found in the MongoDB Diagnostics FAQ; a sysctl sketch follows this list.
  • Transparent Huge Pages: It is recommended to disable Transparent Huge Pages (THP) to ensure the best performance with MongoDB; instructions can be found on the Disable THP page, and a sketch follows this list. Huge pages are a mechanism for managing large amounts of memory using pages (blocks of memory) of sizes such as 2 MB or 1 GB rather than the standard 4 KB. Each page is referenced by a page-table entry, so with 4 KB pages, 1 GB of memory requires 262,144 page-table entries, and larger memory sizes need even larger page tables; meanwhile, the hardware memory management unit (MMU) in a modern processor can cache only hundreds or thousands of page-table entries at a time. Additionally, hardware and memory management algorithms that work well with thousands of pages (megabytes of memory) may have difficulty performing well with millions (or even billions) of pages. This is where huge pages come into the picture. Transparent Huge Pages is an abstraction layer that automates creating, managing, and using huge pages. In other words, THP is a Linux memory management feature that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages. Database workloads, however, perform poorly with THP enabled.
  • Access time settings: Most file systems update a file's last access time (atime) whenever the file is accessed. Since MongoDB performs frequent reads and writes to the file system, this results in unnecessary overhead and performance degradation, so MongoDB recommends disabling atime updates. This can be done by editing the fstab file, as shown in the sketch after this list.
  • Log rotation: A log rotation mechanism should be put in place so that log files do not grow unbounded. Related details can be found on the Rotate Log Files page; a rotation sketch follows this list.
  • RAID 10: MongoDB recommends RAID 10 for production deployments. However, using RAID 10 along with PIOPS on AWS can turn out to be an expensive proposition, so do appropriate due diligence before adopting it. An mdadm sketch follows this list.
  • Indexes on a separate storage device: MongoDB recommends using a separate storage device for indexes when using WiredTiger as the storage engine. Greater detail is available under storage.wiredTiger.engineConfig.directoryForIndexes; a configuration sketch follows this list.
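
The sketches below illustrate several of the settings above. They are minimal examples, not definitive implementations: device names (such as /dev/xvdf), mount points, user names, volume sizes, and IDs are assumptions for illustration and should be adapted to your environment.

File system: a sketch of formatting and mounting an EBS volume with XFS for the MongoDB data directory, assuming the device /dev/xvdf, the mount point /data, and the mongod service user.

    # Format the EBS data volume with XFS (device name is an assumption)
    sudo mkfs.xfs /dev/xvdf

    # Mount it as the MongoDB data directory
    sudo mkdir -p /data
    sudo mount -t xfs /dev/xvdf /data
    sudo chown -R mongod:mongod /data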
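
Separate volumes for data, journal, and logs: a sketch assuming three EBS volumes (/dev/xvdf for data, as above, plus /dev/xvdg for the journal and /dev/xvdh for logs). WiredTiger writes its journal under <dbPath>/journal, so a symlink, created before mongod first starts, points that path at the dedicated volume.

    # Format and mount the journal and log volumes
    sudo mkfs.xfs /dev/xvdg && sudo mkfs.xfs /dev/xvdh
    sudo mkdir -p /journal /log
    sudo mount -t xfs /dev/xvdg /journal
    sudo mount -t xfs /dev/xvdh /log

    # Redirect <dbPath>/journal to the dedicated volume (do this before mongod first starts)
    sudo ln -s /journal /data/journal
    sudo chown -R mongod:mongod /data /journal /log

The matching mongod.conf excerpt, using the assumed mount points:

    # /etc/mongod.conf (excerpt)
    storage:
      dbPath: /data
    systemLog:
      destination: file
      path: /log/mongod.log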
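
Provisioned IOPS: a sketch of creating and attaching an io1 volume with the AWS CLI; the size, IOPS value, availability zone, and IDs are assumptions.

    # Create a Provisioned IOPS (io1) volume
    aws ec2 create-volume --volume-type io1 --iops 4000 --size 500 \
        --availability-zone us-east-1a

    # Attach it to an EBS-optimized instance (IDs are placeholders)
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/sdf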
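
Read-ahead: a sketch using blockdev, which expresses read-ahead in 512-byte sectors (so 32 sectors = 16 KB). This setting does not persist across reboots; a udev rule or startup script is commonly used to make it permanent.

    # Check the current read-ahead value (in 512-byte sectors)
    sudo blockdev --getra /dev/xvdf

    # Set read-ahead to 0, per the MongoDB recommendation
    sudo blockdev --setra 0 /dev/xvdf

    # Alternative value to benchmark against: 32 sectors (16 KB)
    # sudo blockdev --setra 32 /dev/xvdf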
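
ulimit: a sketch of the /etc/security/limits.conf entries for the recommended values, assuming mongod runs as the mongod user. Note that on systemd-based distributions, services take their limits from the unit file (LimitNOFILE, LimitNPROC) rather than from limits.conf.

    # /etc/security/limits.conf (excerpt)
    mongod  soft  nofile  64000
    mongod  hard  nofile  64000
    mongod  soft  nproc   64000
    mongod  hard  nproc   64000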
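
TCP keepalive: a sketch of checking and setting the keepalive value with sysctl.

    # Check the current value (seconds); 7200 is a common default
    sysctl net.ipv4.tcp_keepalive_time

    # Set it to the recommended 120 seconds for the running system
    sudo sysctl -w net.ipv4.tcp_keepalive_time=120

    # Persist the setting across reboots
    echo 'net.ipv4.tcp_keepalive_time = 120' | sudo tee -a /etc/sysctl.conf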
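
Transparent Huge Pages: a sketch of disabling THP for the current boot. Making this persistent requires an init script or systemd unit, as described on the Disable THP page.

    # Disable THP until the next reboot
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

    # Verify; the active setting is shown in brackets, e.g. 'always madvise [never]'
    cat /sys/kernel/mm/transparent_hugepage/enabled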
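
Access time: a sketch of an fstab entry that disables atime updates on the data volume, plus a remount to apply it immediately.

    # /etc/fstab (excerpt); device and mount point are assumptions
    /dev/xvdf  /data  xfs  defaults,noatime  0  0

    # Apply without rebooting
    sudo mount -o remount,noatime /data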
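
Log rotation: a sketch of the two built-in ways to ask mongod to rotate its log file.

    # From the mongo shell, via the logRotate command
    mongo admin --eval 'db.adminCommand({ logRotate: 1 })'

    # Or by sending SIGUSR1 to the mongod process
    sudo kill -SIGUSR1 $(pidof mongod)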
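
RAID 10: a sketch of building a software RAID 10 array from four EBS volumes with mdadm; the device names are assumptions.

    # Create a 4-volume RAID 10 array and format it with XFS
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
    sudo mkfs.xfs /dev/md0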
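
Indexes on a separate device: a sketch of the mongod.conf setting. With directoryForIndexes enabled, WiredTiger stores indexes under <dbPath>/index, so a separate volume can be mounted (or symlinked) at that path before mongod first starts.

    # /etc/mongod.conf (excerpt)
    storage:
      dbPath: /data
      wiredTiger:
        engineConfig:
          directoryForIndexes: true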
