In no time you can create a fully functional analytics service, from data collection to dashboard presentation.
But what happens at scale? How can you make sure this blazing-fast solution keeps serving your business team even when your data includes hundreds of millions of data points or more?
What to Focus on?
Elasticsearch performs two major tasks:
- Data loading and indexing, which is CPU intensive.
- Search and queries, which are memory intensive.
Step 1: Keep Your Version Up to Date
Elasticsearch is a relatively young tool, and the team delivers new features and fixes rapidly, so make sure you keep up with the latest versions.
Step 2: Tune Your Memory
Elasticsearch heap size should be about 50% of your machine's memory. Configure it by setting the ES_HEAP_SIZE environment variable to this number (2g, for example): export ES_HEAP_SIZE=2g
Note: this method alone will probably not work, as the init.d script overrides the variable; edit your /etc/init.d/elasticsearch and set the ES_HEAP_SIZE=2g parameter there instead.
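For example, a minimal sketch of the relevant line, assuming a Debian/Ubuntu-style package installation (your init script path may differ):

# In /etc/init.d/elasticsearch (or /etc/default/elasticsearch on some packages):
# allocate half of a 4GB machine's RAM to the Elasticsearch heap
ES_HEAP_SIZE=2g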
Step 3: Select Your Storage
Disks are crucial when your data is larger than your memory. Choose local SSDs; they will cost less and perform better than network-attached storage.
Step 4: Stripe Your Data
Use path.data and path.logs to stripe your data and logs across multiple disks to gain more IOPS.
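A minimal elasticsearch.yml sketch, assuming two data disks mounted at the hypothetical /data1 and /data2:

# /etc/elasticsearch/elasticsearch.yml
# Elasticsearch stripes index files across all listed data paths
path.data: ["/data1/elasticsearch", "/data2/elasticsearch"]
# Logs go to a single path
path.logs: /data2/elasticsearch/logs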
Step 5: Prepare for Index Merging
Index merging is probably the most frustrating process in Elasticsearch. It is required to keep your system performing well in the long run, but it can result in relatively short periods of high resource utilization. By default, Elasticsearch protects itself by throttling merges to 20MB/s. If Elasticsearch serves as your back-office system, you can disable this throttling by setting index.store.throttle.type to none, as shown below.
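For example, you can change the cluster-wide equivalent of this setting at runtime through the cluster settings API (a sketch; the index-level setting can be changed per index in the same way):

# Disable merge throttling cluster-wide (use with care on user-facing clusters)
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": { "indices.store.throttle.type": "none" }
}'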
Step 6: Plan for Bulk Loading
Like any other data solution, you should load data in bulk when possible to speed up loading and minimize resource utilization. This is why you should check out the Bulk API.
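A minimal Bulk API sketch with a hypothetical events index, sending two documents in a single request (note that the body must end with a newline):

# --data-binary preserves the newlines the Bulk API requires
curl -XPOST 'http://localhost:9200/_bulk' --data-binary '{ "index": { "_index": "events", "_type": "event" } }
{ "user": "alice", "action": "click" }
{ "index": { "_index": "events", "_type": "event" } }
{ "user": "bob", "action": "view" }
'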
Step 7: Optimize Your Index
Run optimize on your index when it is stable (for example, after a daily load) to ensure best performance.
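For example, a sketch assuming a daily index named events-2015.01.01:

# Merge a stable index down to a single segment for faster searches
curl -XPOST 'http://localhost:9200/events-2015.01.01/_optimize?max_num_segments=1'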
Step 8: Enlarge the File Handler Limit
Like other data solutions, Elasticsearch uses a large number of file handles. Make sure to add the following settings to /etc/security/limits.conf:
* soft nofile 64000
* hard nofile 64000
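You can then verify the limit Elasticsearch actually sees through the nodes info API (a sketch; look for max_file_descriptors in the output):

curl 'http://localhost:9200/_nodes/process?pretty'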
Step 9: Make RAM Space for Your Indexes
Elasticsearch is optimized for clusters with over 10GB of RAM, as its default room for indexes is 10% of its memory. Since the best practice is to have at least 512MB for the index buffer size, if your system is not that large, make sure you add the following configuration to /etc/elasticsearch/elasticsearch.yml:
indices.memory.index_buffer_size: 512mb
Step 10: Change Mappings
By default, Elasticsearch defines some mappings that you may be able to disable in your case to save disk space and memory and to boost performance (see the example after this list):
- The _source field, which stores the original document.
- The _all field, which combines all fields into a single one to support searching for a value in any field.
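For example, a sketch with a hypothetical events index and event type, disabling both fields at index creation time (only do this if you never need the original document back from Elasticsearch and do not rely on _all searches):

curl -XPUT 'http://localhost:9200/events' -d '{
  "mappings": {
    "event": {
      "_source": { "enabled": false },
      "_all": { "enabled": false }
    }
  }
}'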
Step 11: Add Monitoring
You can either choose Marvel, the ELK management tool with the Kibana look and feel that is part of the Enterprise package, or build your own monitoring using open source solutions or hosted solutions like New Relic.
Step 12: Sharding
If none of the above works, start sharding and adding nodes to your system.
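Keep in mind that the shard count is fixed at index creation time, so plan ahead; a sketch with hypothetical numbers:

# Create an index with 8 shards so it can later be spread across more nodes
curl -XPUT 'http://localhost:9200/events' -d '{
  "settings": { "number_of_shards": 8, "number_of_replicas": 1 }
}'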
Bottom Line
Elasticsearch is an amazing tool, and with the right configuration it can keep serving your analytics needs even at the scale of billions of events.
Keep Performing,
Moshe Kaplan