Elasticsearch Explained: Trying to create too many scroll contexts. Must be less than or equal to 500
Hello everyone! Today we are going to discuss the following error in Elasticsearch:
"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting"
Let's try to understand why this occurs and how we can solve it.
When & Why Does This Error Trigger?
As the title indicates, this error occurs when you are using the Scroll API, especially with many scrolls open at the same time.
Scrolls are expensive to run concurrently, and each one reserves resources for as long as it is kept alive.
For each scroll ID, there is a unique point-in-time view of the current set of segments preserved for that scroll. This hangs on to files and related caches that would otherwise be removed by the constant segment rewriting that happens while indexing is active. This is why it is especially resource-intensive to do concurrently.
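Before digging in, it helps to know how close you are to the limit. A quick way (assuming a local node on port 9200) is to check the open_contexts counter in the node search stats:
curl "http://127.0.0.1:9200/_nodes/stats/indices/search?pretty"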
Let's dive a little deeper.
In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive. Its value (e.g. 1m) does not need to be long enough to process all of the data; it just needs to be long enough to process the previous batch of results. Each scroll request (with the scroll parameter) sets a new expiry time. If a scroll request doesn’t pass in the scroll parameter, then the search context will be freed as part of that scroll request.
POST /twitter/_search?scroll=1m
{
  "size": 100,
  "query": {
    "match" : {
      "title" : "elasticsearch"
    }
  }
}
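The response to this request includes a _scroll_id, which you pass back to the Scroll API to fetch the next batch. Each such call resets the 1m timeout (the ID below is shortened for readability):
POST /_search/scroll
{
  "scroll" : "1m",
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gB..."
}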
Normally, the background merge process optimizes the index by merging together smaller segments to create new bigger segments, at which time the smaller segments are deleted. This process continues during scrolling, but an open search context prevents the old segments from being deleted while they are still in use. This is how Elasticsearch is able to return the results of the initial search request, regardless of subsequent changes to documents.
How to Prevent & Fix it?
Now we know that many concurrent scroll requests with long scroll timeouts (e.g. 60m) can consume resources extensively and cause this issue.
If you have hit this error and it is blocking operations on your cluster, you have two options: clear your open scrolls, or temporarily increase max_open_scroll_context until the existing scrolls are cleared automatically when their timeouts elapse. Raising the limit is not a recommended long-term solution, but if it lets you avoid data loss or interrupting in-flight scroll requests, it can be your savior.
Clear Scroll API:
Search contexts are automatically removed when the scroll timeout is exceeded. However, keeping scrolls open has a cost, so a scroll should be cleared explicitly as soon as it is no longer being used, via the Clear Scroll API:
DELETE /_search/scroll
{
  "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
Increase the size of max_open_scroll_context
To protect against issues caused by having too many scrolls open, you can limit the number of open scrolls per node with the search.max_open_scroll_context cluster setting. It defaults to unlimited in Elasticsearch 6.x, while 7.0 and later cap it at 500 by default, which is exactly the limit mentioned in the error above.
To check the current value (and the default), query the cluster settings API with defaults included:
curl "http://127.0.0.1:9200/_cluster/settings?include_defaults=true&pretty=true"
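The full defaults listing is long; if you only care about this one setting, the standard filter_path response-filtering parameter narrows the output (the wildcard matches the setting wherever it appears in the response):
curl "http://127.0.0.1:9200/_cluster/settings?include_defaults=true&filter_path=**.max_open_scroll_context&pretty"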
To update the max_open_scroll_context size, you can use the following command (replace ip with your node's address):
curl -X PUT "http://ip:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent" : {
    "search.max_open_scroll_context": 5000
  },
  "transient": {
    "search.max_open_scroll_context": 5000
  }
}'
Note: Don't forget to set it back to the lower value once the open scrolls have expired.
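One way to do that is to reset the setting to its built-in default by sending null (same placeholder host as above):
curl -X PUT "http://ip:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent" : {
    "search.max_open_scroll_context": null
  },
  "transient": {
    "search.max_open_scroll_context": null
  }
}'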
Thanks! Enjoy Programming!!
Reference Links:
https://www.elastic.co/guide/en/elasticsearch/reference/6.8/search-request-scroll.html