Identify any Amazon Elasticsearch Service (ES) clusters that appear to run low on disk space and scale them up by adding EBS-based storage to mitigate any issues triggered by insufficient disk space and improve their I/O performance. The default threshold for the amount of free storage space is 10%, as any value below this can seriously impact your ES clusters' performance. For example, if free storage space becomes dangerously low, your clusters can start blocking incoming write requests.
The AWS CloudWatch metric used to detect Elasticsearch clusters with low free storage space is:
FreeStorageSpace – the amount of available storage space across all data nodes in the cluster (units: megabytes). The AWS ES service throws a "ClusterBlockException" error when this metric reaches 0.
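The threshold check described above can be sketched as a small function. This is a minimal illustration of the rule's logic, not Cloud Conformity's implementation; the parameter names are assumptions, and in practice the free-storage value would come from the CloudWatch FreeStorageSpace metric.

```python
def is_low_on_storage(free_storage_mb, total_storage_mb, threshold_pct=10.0):
    """Return True when free storage falls below the threshold percentage.

    free_storage_mb mirrors the CloudWatch FreeStorageSpace metric (megabytes);
    total_storage_mb is the cluster's total data-node storage capacity.
    threshold_pct defaults to the rule's 10% default.
    """
    free_pct = (free_storage_mb / total_storage_mb) * 100.0
    return free_pct < threshold_pct
```

For example, a cluster with 50 MB free out of 1,000 MB total (5%) would be flagged, while one with 200 MB free (20%) would not.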
This rule can help you work with the AWS Well-Architected Framework
This rule resolution is part of the Cloud Conformity Security & Compliance tool for AWS
Low disk space leads to instability and slowdowns. Detecting ES clusters that run low on disk space is crucial, especially when these AWS resources are used in production, because when ES clusters run out of free storage space, basic write operations such as adding documents and creating indices begin to fail.
Note: You can change the default threshold value (10%) for this rule in the Cloud Conformity console and set your own value for the amount of available storage space to configure the storage limits for your Elasticsearch clusters.
To identify AWS ElasticSearch clusters that run low on disk space, perform the following actions:
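The identification pass can be sketched as a filter over per-domain storage figures. This is a hedged illustration: the input field names (`name`, `free_storage_mb`, `total_storage_mb`) are hypothetical, and in a real audit the values would be retrieved per domain from CloudWatch (for example, via the FreeStorageSpace metric) rather than supplied in a list.

```python
def find_low_storage_domains(domains, threshold_pct=10.0):
    """Return the names of ES domains whose free storage falls below the
    threshold percentage.

    domains: iterable of dicts with hypothetical keys 'name',
    'free_storage_mb', and 'total_storage_mb'.
    """
    flagged = []
    for domain in domains:
        free_pct = (domain["free_storage_mb"] / domain["total_storage_mb"]) * 100.0
        if free_pct < threshold_pct:
            flagged.append(domain["name"])
    return flagged
```

Each flagged domain name would then be a candidate for the scale-up procedure described below.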
To expand the storage space for AWS Elasticsearch clusters that run low on disk space, you can scale them up by adding storage to the existing data nodes' EBS volumes. To scale up and recover from the lack of free disk space, perform the following actions:
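One way to prepare the scale-up request is to compute a new, larger EBS volume size and assemble the EBS options for the domain configuration update. The sketch below is an assumption-laden illustration: the `EBSEnabled`, `VolumeType`, and `VolumeSize` keys follow the shape of the Elasticsearch Service API's EBSOptions structure, but the 1.5x growth factor and the helper name are hypothetical choices, and the resulting dict would still need to be passed to an actual update call (e.g. the service's UpdateElasticsearchDomainConfig operation).

```python
def build_ebs_scale_up_options(current_size_gb, growth_factor=1.5, volume_type="gp2"):
    """Build an EBSOptions-style payload that grows each data node's
    EBS volume by the given factor.

    current_size_gb: the domain's current per-node EBS volume size.
    growth_factor: hypothetical multiplier for the new size (assumption).
    """
    new_size_gb = int(current_size_gb * growth_factor)
    return {
        "EBSEnabled": True,        # keep EBS-based storage enabled
        "VolumeType": volume_type, # e.g. general-purpose SSD
        "VolumeSize": new_size_gb, # enlarged per-node volume size (GB)
    }
```

For example, a domain whose data nodes currently use 100 GB volumes would be scaled to 150 GB volumes with the default factor.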