Add a _freeze / _unfreeze API (#35592)
This commit adds a REST endpoint for freezing and unfreezing an index.
Among other cleanups, mainly fixing an issue with accessing package-private APIs
from a plugin that was caught by integration tests, this change also adds
documentation for frozen indices.
Note: frozen indices are marked as `beta` and available as a basic feature.

Relates to #34352
s1monw committed Nov 20, 2018
1 parent 4752552 commit fa7679e
Showing 23 changed files with 796 additions and 118 deletions.
56 changes: 56 additions & 0 deletions docs/reference/frozen-indices.asciidoc
@@ -0,0 +1,56 @@
[role="xpack"]
[testenv="basic"]
[[frozen-indices]]
= Frozen Indices

[partintro]
--
Elasticsearch indices require a significant amount of memory in order to be open and searchable. Yet not all indices need to be
writable at the same time, and access patterns differ over time. For example, indices in the time series or logging use cases
are unlikely to be queried once they age out, but still need to be kept around for retention policy purposes.

In order to keep indices available and queryable for a longer period, while at the same time reducing their hardware requirements, they can be transitioned
into a frozen state. Once an index is frozen, it is made read-only and all of its transient shard memory (aside from mappings and analyzers)
is moved to persistent storage. This allows for a much higher disk-to-heap ratio on individual nodes. The dropped data structures are
reloaded on demand (and subsequently dropped again) for each search request that targets the frozen index. A search request that hits
one or more frozen shards is executed on a throttled threadpool that ensures we never run more than
`N` (`1` by default) concurrent searches (see <<search-throttled>>). This protects nodes from exceeding the available memory due to incoming search requests.

In contrast to ordinary open indices, frozen indices are expected to execute slowly and are not designed for high query load. Parallelism is
gained only at the node level, and loading data structures on demand is expected to be one or more orders of magnitude slower than query
execution at the shard level. Depending on the data in an index, a frozen index may execute searches in the seconds-to-minutes range, while the same index in an unfrozen state may execute the same search request in milliseconds.
--

== Best Practices

Since frozen indices provide a much higher disk-to-heap ratio at the expense of search latency, it is advisable to allocate frozen indices to
dedicated nodes to prevent searches on frozen indices from influencing traffic on low-latency nodes. There is significant overhead in loading
data structures on demand, which can cause page faults and garbage collections that further slow down query execution. Frozen indices can be
pinned to such dedicated nodes with shard allocation filtering, as sketched below.
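
The following is a minimal sketch; the `box_type` node attribute and its `cold` value are illustrative assumptions, not part of this change:

[source,js]
--------------------------------------------------
PUT /my_index/_settings
{
  "index.routing.allocation.require.box_type": "cold"
}
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]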

Since indices that are eligible for freezing are unlikely to change in the future, disk space can be optimized as described in <<tune-for-disk-usage>>.
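
In particular, merging each shard down to a single segment before freezing reduces both disk usage and the amount of transient data that has
to be rebuilt per search; a sketch using the force merge API (the index name is illustrative):

[source,js]
--------------------------------------------------
POST /my_index/_forcemerge?max_num_segments=1
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]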

== Searching a frozen index

Frozen indices are throttled in order to limit memory consumption per node. The number of concurrently loaded frozen indices per node is
limited by the number of threads in the <<search-throttled>> threadpool, which is `1` by default.
Search requests will not be executed against frozen indices by default, even if a frozen index is named explicitly. This is
to prevent accidental slowdowns caused by targeting a frozen index by mistake. To include frozen indices, a search request must be executed with
the query parameter `ignore_throttled=false`.

[source,js]
--------------------------------------------------
GET /twitter/_search?q=user:kimchy&ignore_throttled=false
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
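
To see how busy the throttled pool is on each node, the cat thread pool API can be filtered to the `search_throttled` pool; a sketch (the
column selection is illustrative):

[source,js]
--------------------------------------------------
GET /_cat/thread_pool/search_throttled?v&h=node_name,name,active,queue,rejected
--------------------------------------------------
// CONSOLE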

[IMPORTANT]
================================
While frozen indices are slow to search, they can be pre-filtered efficiently. The request parameter `pre_filter_shard_size` specifies
a threshold that, when exceeded, triggers a round-trip to pre-filter search shards that cannot possibly match.
This filter phase can limit the number of shards searched significantly. For instance, if a date range filter is applied, then all indices (frozen or unfrozen) that do not contain documents within the date range can be skipped efficiently.
The default value for `pre_filter_shard_size` is `128`, but it is recommended to set it to `1` when searching frozen indices; there is no
significant overhead associated with this pre-filter phase.
================================
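
For example, the following sketch combines both query parameters, so every shard is pre-filtered before the throttled search runs:

[source,js]
--------------------------------------------------
GET /twitter/_search?q=user:kimchy&ignore_throttled=false&pre_filter_shard_size=1
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]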


2 changes: 2 additions & 0 deletions docs/reference/index.asciidoc
@@ -65,6 +65,8 @@ include::monitoring/index.asciidoc[]

include::rollup/index.asciidoc[]

include::frozen-indices.asciidoc[]

include::rest-api/index.asciidoc[]

include::commands/index.asciidoc[]
50 changes: 50 additions & 0 deletions docs/reference/indices/apis/freeze.asciidoc
@@ -0,0 +1,50 @@
[role="xpack"]
[testenv="basic"]
[[freeze-index-api]]
== Freeze Index API
++++
<titleabbrev>Freeze Index</titleabbrev>
++++

Freezes an index.

[float]
=== Request

`POST /<index>/_freeze`

[float]
=== Description

A frozen index has almost no overhead on the cluster (except
for maintaining its metadata in memory) and is blocked for write operations.
See <<frozen-indices>> and <<unfreeze-index-api>>.

[float]
=== Path Parameters

`index` (required)::
(string) Identifier for the index.

//=== Query Parameters

//=== Authorization

[float]
=== Examples

The following example freezes and unfreezes an index:

[source,js]
--------------------------------------------------
POST /my_index/_freeze
POST /my_index/_unfreeze
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]
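
To verify whether an index is frozen, its settings can be inspected. The sketch below assumes the frozen state is surfaced as the
`index.frozen` setting; treat the setting name as an assumption for illustration:

[source,js]
--------------------------------------------------
GET /my_index/_settings?filter_path=**.index.frozen
--------------------------------------------------
// CONSOLE
// TEST[continued]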

[IMPORTANT]
================================
Freezing an index closes the index and reopens it within the same API call. This means that primaries are not allocated for a short
amount of time, and the cluster goes red until the primaries are allocated again. This limitation might be removed in the future.
================================
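
Because the cluster briefly goes red while the index reopens, a client that needs to block until recovery has finished can poll cluster
health; a sketch (index name and timeout are illustrative):

[source,js]
--------------------------------------------------
GET /_cluster/health/my_index?wait_for_status=yellow&timeout=30s
--------------------------------------------------
// CONSOLE
// TEST[continued]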
50 changes: 50 additions & 0 deletions docs/reference/indices/apis/unfreeze.asciidoc
@@ -0,0 +1,50 @@
[role="xpack"]
[testenv="basic"]
[[unfreeze-index-api]]
== Unfreeze Index API
++++
<titleabbrev>Unfreeze Index</titleabbrev>
++++

Unfreezes an index.

[float]
=== Request

`POST /<index>/_unfreeze`

[float]
=== Description

When a frozen index is unfrozen, the index goes through the normal recovery
process and becomes writeable again. See <<frozen-indices>> and <<freeze-index-api>>.

[float]
=== Path Parameters

`index` (required)::
(string) Identifier for the index.


//=== Query Parameters

//=== Authorization

[float]
=== Examples

The following example freezes and unfreezes an index:

[source,js]
--------------------------------------------------
POST /my_index/_freeze
POST /my_index/_unfreeze
--------------------------------------------------
// CONSOLE
// TEST[s/^/PUT my_index\n/]
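
Once unfrozen, the index can be searched again without the `ignore_throttled=false` query parameter; a minimal sketch continuing the
example above:

[source,js]
--------------------------------------------------
GET /my_index/_search
--------------------------------------------------
// CONSOLE
// TEST[continued]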

[IMPORTANT]
================================
Freezing an index closes the index and reopens it within the same API call. This means that primaries are not allocated for a short
amount of time, and the cluster goes red until the primaries are allocated again. This limitation might be removed in the future.
================================
4 changes: 4 additions & 0 deletions docs/reference/modules/threadpool.asciidoc
@@ -25,6 +25,10 @@ There are several thread pools, but the important ones include:
`int((# of available_processors * 3) / 2) + 1`, and initial queue_size of
`1000`.

[[search-throttled]]`search_throttled`::
For count/search/suggest/get operations on `search_throttled` indices. Thread pool type is
`fixed_auto_queue_size` with a size of `1`, and initial queue_size of `100` (a quick way to inspect this pool is sketched below).

`get`::
For get operations. Thread pool type is `fixed`
with a size of `# of available processors`,
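
As a quick check, the per-node configuration of the `search_throttled` pool described above can be retrieved through the nodes info API;
a sketch (the `filter_path` is illustrative):

[source,js]
--------------------------------------------------
GET /_nodes/thread_pool?filter_path=nodes.*.thread_pool.search_throttled
--------------------------------------------------
// CONSOLE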
3 changes: 3 additions & 0 deletions docs/reference/rest-api/index.asciidoc
@@ -10,6 +10,7 @@ directly to configure and access {xpack} features.
* <<info-api,Info API>>
* <<ccr-apis,Cross-cluster replication APIs>>
* <<graph-explore-api,Graph Explore API>>
* <<freeze-index-api>>, <<unfreeze-index-api>>
* <<index-lifecycle-management-api,Index lifecycle management APIs>>
* <<licensing-apis,Licensing APIs>>
* <<ml-apis,Machine Learning APIs>>
@@ -23,11 +24,13 @@ directly to configure and access {xpack} features.
include::info.asciidoc[]
include::{es-repo-dir}/ccr/apis/ccr-apis.asciidoc[]
include::{es-repo-dir}/graph/explore.asciidoc[]
include::{es-repo-dir}/indices/apis/freeze.asciidoc[]
include::{es-repo-dir}/ilm/apis/ilm-api.asciidoc[]
include::{es-repo-dir}/licensing/index.asciidoc[]
include::{es-repo-dir}/migration/migration.asciidoc[]
include::{es-repo-dir}/ml/apis/ml-api.asciidoc[]
include::{es-repo-dir}/rollup/rollup-api.asciidoc[]
include::{xes-repo-dir}/rest-api/security.asciidoc[]
include::{es-repo-dir}/indices/apis/unfreeze.asciidoc[]
include::{xes-repo-dir}/rest-api/watcher.asciidoc[]
include::defs.asciidoc[]
@@ -109,7 +109,7 @@ protected void masterOperation(final CloseIndexRequest request, final ClusterSta
.ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())
.indices(concreteIndices);

indexStateService.closeIndex(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {
indexStateService.closeIndices(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {

@Override
public void onResponse(ClusterStateUpdateResponse response) {
@@ -28,7 +28,7 @@ public class OpenIndexClusterStateUpdateRequest extends IndicesClusterStateUpdat

private ActiveShardCount waitForActiveShards = ActiveShardCount.DEFAULT;

OpenIndexClusterStateUpdateRequest() {
public OpenIndexClusterStateUpdateRequest() {

}

@@ -40,10 +40,10 @@ public class OpenIndexResponse extends ShardsAcknowledgedResponse {
declareAcknowledgedAndShardsAcknowledgedFields(PARSER);
}

OpenIndexResponse() {
public OpenIndexResponse() {
}

OpenIndexResponse(boolean acknowledged, boolean shardsAcknowledged) {
public OpenIndexResponse(boolean acknowledged, boolean shardsAcknowledged) {
super(acknowledged, shardsAcknowledged);
}

@@ -84,7 +84,7 @@ public MetaDataIndexStateService(ClusterService clusterService, AllocationServic
this.activeShardsObserver = new ActiveShardsObserver(clusterService, threadPool);
}

public void closeIndex(final CloseIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {
public void closeIndices(final CloseIndexClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {
if (request.indices() == null || request.indices().length == 0) {
throw new IllegalArgumentException("Index name is required");
}
@@ -99,46 +99,50 @@ protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {

@Override
public ClusterState execute(ClusterState currentState) {
Set<IndexMetaData> indicesToClose = new HashSet<>();
for (Index index : request.indices()) {
final IndexMetaData indexMetaData = currentState.metaData().getIndexSafe(index);
if (indexMetaData.getState() != IndexMetaData.State.CLOSE) {
indicesToClose.add(indexMetaData);
}
}
return closeIndices(currentState, request.indices(), indicesAsString);
}
});
}

if (indicesToClose.isEmpty()) {
return currentState;
}
public ClusterState closeIndices(ClusterState currentState, final Index[] indices, String indicesAsString) {
Set<IndexMetaData> indicesToClose = new HashSet<>();
for (Index index : indices) {
final IndexMetaData indexMetaData = currentState.metaData().getIndexSafe(index);
if (indexMetaData.getState() != IndexMetaData.State.CLOSE) {
indicesToClose.add(indexMetaData);
}
}

// Check if index closing conflicts with any running restores
RestoreService.checkIndexClosing(currentState, indicesToClose);
// Check if index closing conflicts with any running snapshots
SnapshotsService.checkIndexClosing(currentState, indicesToClose);
logger.info("closing indices [{}]", indicesAsString);
if (indicesToClose.isEmpty()) {
return currentState;
}

MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
ClusterBlocks.Builder blocksBuilder = ClusterBlocks.builder()
.blocks(currentState.blocks());
for (IndexMetaData openIndexMetadata : indicesToClose) {
final String indexName = openIndexMetadata.getIndex().getName();
mdBuilder.put(IndexMetaData.builder(openIndexMetadata).state(IndexMetaData.State.CLOSE));
blocksBuilder.addIndexBlock(indexName, INDEX_CLOSED_BLOCK);
}
// Check if index closing conflicts with any running restores
RestoreService.checkIndexClosing(currentState, indicesToClose);
// Check if index closing conflicts with any running snapshots
SnapshotsService.checkIndexClosing(currentState, indicesToClose);
logger.info("closing indices [{}]", indicesAsString);

MetaData.Builder mdBuilder = MetaData.builder(currentState.metaData());
ClusterBlocks.Builder blocksBuilder = ClusterBlocks.builder()
.blocks(currentState.blocks());
for (IndexMetaData openIndexMetadata : indicesToClose) {
final String indexName = openIndexMetadata.getIndex().getName();
mdBuilder.put(IndexMetaData.builder(openIndexMetadata).state(IndexMetaData.State.CLOSE));
blocksBuilder.addIndexBlock(indexName, INDEX_CLOSED_BLOCK);
}

ClusterState updatedState = ClusterState.builder(currentState).metaData(mdBuilder).blocks(blocksBuilder).build();
ClusterState updatedState = ClusterState.builder(currentState).metaData(mdBuilder).blocks(blocksBuilder).build();

RoutingTable.Builder rtBuilder = RoutingTable.builder(currentState.routingTable());
for (IndexMetaData index : indicesToClose) {
rtBuilder.remove(index.getIndex().getName());
}
RoutingTable.Builder rtBuilder = RoutingTable.builder(currentState.routingTable());
for (IndexMetaData index : indicesToClose) {
rtBuilder.remove(index.getIndex().getName());
}

//no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask
return allocationService.reroute(
ClusterState.builder(updatedState).routingTable(rtBuilder.build()).build(),
"indices closed [" + indicesAsString + "]");
}
});
//no explicit wait for other nodes needed as we use AckedClusterStateUpdateTask
return allocationService.reroute(
ClusterState.builder(updatedState).routingTable(rtBuilder.build()).build(),
"indices closed [" + indicesAsString + "]");
}

public void openIndex(final OpenIndexClusterStateUpdateRequest request,
@@ -20,6 +20,7 @@

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexCommit;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
@@ -66,7 +67,7 @@ public class ReadOnlyEngine extends Engine {
private final IndexCommit indexCommit;
private final Lock indexWriterLock;
private final DocsStats docsStats;
protected final RamAccountingSearcherFactory searcherFactory;
private final RamAccountingSearcherFactory searcherFactory;

/**
* Creates a new ReadOnlyEngine. This ctor can also be used to open a read-only engine on top of an already opened
@@ -414,4 +415,8 @@ public void updateMaxUnsafeAutoIdTimestamp(long newTimestamp) {
public void initializeMaxSeqNoOfUpdatesOrDeletes() {
advanceMaxSeqNoOfUpdatesOrDeletes(seqNoStats.getMaxSeqNo());
}

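/**
 * Registers a newly opened reader (and unregisters the one it replaces, if any) with the
 * RAM-accounting searcher factory. Exposed so subclasses that open readers lazily can hook
 * into the same accounting.
 */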
protected void processReaders(IndexReader reader, IndexReader previousReader) {
searcherFactory.processReaders(reader, previousReader);
}
}
@@ -389,14 +389,26 @@ protected IndexShard reinitShard(IndexShard current, IndexingOperationListener..
* @param listeners new listeners to use for the newly created shard
*/
protected IndexShard reinitShard(IndexShard current, ShardRouting routing, IndexingOperationListener... listeners) throws IOException {
return reinitShard(current, routing, current.engineFactory, listeners);
}

/**
* Takes an existing shard, closes it and starts a new initializing shard at the same location
*
* @param routing the shard routing to use for the newly created shard.
* @param listeners new listeners to use for the newly created shard
* @param engineFactory the engine factory for the new shard
*/
protected IndexShard reinitShard(IndexShard current, ShardRouting routing, EngineFactory engineFactory,
IndexingOperationListener... listeners) throws IOException {
closeShards(current);
return newShard(
routing,
current.shardPath(),
current.indexSettings().getIndexMetaData(),
null,
null,
current.engineFactory,
engineFactory,
current.getGlobalCheckpointSyncer(),
EMPTY_EVENT_LISTENER, listeners);
}
@@ -158,7 +158,7 @@ private synchronized DirectoryReader getOrOpenReader() throws IOException {
listeners.beforeRefresh();
}
reader = DirectoryReader.open(engineConfig.getStore().directory());
searcherFactory.processReaders(reader, null);
processReaders(reader, null);
reader = lastOpenedReader = wrapReader(reader, Function.identity());
reader.getReaderCacheHelper().addClosedListener(this::onReaderClosed);
for (ReferenceManager.RefreshListener listeners : config().getInternalRefreshListener()) {