Include size of snapshot in snapshot metadata #29602

Merged: 25 commits, May 25, 2018
Changes from 14 commits
6bcfcb8
Include size of snapshot in snapshot metadata
vladimirdolzhenko Apr 18, 2018
a08c807
Include size of snapshot in snapshot metadata - changes on Yannick's PR
vladimirdolzhenko Apr 19, 2018
50d7c78
Include size of snapshot in snapshot metadata - changes on 2nd Yannic…
vladimirdolzhenko Apr 20, 2018
6ea7e01
Include size of snapshot in snapshot metadata - changes on 3nd Yannic…
vladimirdolzhenko Apr 20, 2018
4ca0ba9
Include size of snapshot in snapshot metadata #29602
May 11, 2018
7ab7262
Include size of snapshot in snapshot metadata #29602
May 11, 2018
9a839f3
Revert "Include size of snapshot in snapshot metadata #29602"
May 11, 2018
4a0bbc7
Include size of snapshot in snapshot metadata #29602
May 11, 2018
3ebc769
#29602 added snapshot stats section to docs
May 14, 2018
828bb12
#18543 use "file_count" to eliminate "incremental", "processed" and "…
May 15, 2018
5582562
origin/master merged
May 17, 2018
f3c2306
fix doc generation
May 22, 2018
39e0190
adjusted docs, some comments
May 22, 2018
34243af
added section for 7.0 migration
May 23, 2018
816c7d9
Merge remote-tracking branch 'remotes/origin/master' into fix/18543
May 23, 2018
d13d991
Merge remote-tracking branch 'remotes/origin/master' into fix/18543
May 23, 2018
76fe2ac
typos fixed
May 23, 2018
c86c6e5
Merge remote-tracking branch 'remotes/origin/master' into fix/18543
May 23, 2018
4176564
Merge remote-tracking branch 'remotes/origin/master' into fix/18543
May 24, 2018
c05b191
added REST API test for snapshot/status (suspended before backporting…
May 25, 2018
563bea7
Merge remote-tracking branch 'remotes/origin/master' into fix/18543
May 25, 2018
e1400a8
added REST API test for snapshot/status with BWC fields
May 25, 2018
ccf7b22
fixed matching of time_in_millis/start_time_in_millis
May 25, 2018
f2a33d6
fix snapshot name for bwc test
May 25, 2018
7df83ef
fix snapshot name for bwc test
May 25, 2018
3 changes: 2 additions & 1 deletion docs/reference/migration/migrate_7_0.asciidoc
@@ -34,7 +34,7 @@ Elasticsearch 6.x in order to be readable by Elasticsearch 7.x.
* <<breaking_70_api_changes>>
* <<breaking_70_java_changes>>
* <<breaking_70_settings_changes>>

* <<breaking_70_snapshotstats_changes>>

include::migrate_7_0/aggregations.asciidoc[]
include::migrate_7_0/analysis.asciidoc[]
@@ -47,3 +47,4 @@ include::migrate_7_0/plugins.asciidoc[]
include::migrate_7_0/api.asciidoc[]
include::migrate_7_0/java.asciidoc[]
include::migrate_7_0/settings.asciidoc[]
include::migrate_7_0/snapshotstats.asciidoc[]
13 changes: 13 additions & 0 deletions docs/reference/migration/migrate_7_0/snapshotstats.asciidoc
@@ -0,0 +1,13 @@
[[breaking_70_snapshotstats_changes]]
=== Snapshot stats changes

Snapshot stats details are provided in a new structured way:

* `total` section for all the files that are referenced by the snapshot.
[Review comment from a Contributor] dot at the end of sentence

* `incremental` section for those files that actually needed to be copied over as part of the incremental snapshotting.
* In case of a snapshot that's still in progress, there's also a `processed` section for files that are in the process of being copied.
[Review comment from a Contributor] space between a and processed

==== Deprecated `number_of_files`, `processed_files`, `total_size_in_bytes` and `processed_size_in_bytes` snapshot stats properties have been removed

* The `number_of_files` and `total_size_in_bytes` properties are removed; use the corresponding values of the nested `total` object instead.
* The `processed_files` and `processed_size_in_bytes` properties are removed; use the corresponding values of the nested `processed` object instead.
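A client migrating from 6.x can translate the removed flat fields into the new nested layout. This is a hypothetical helper, not part of Elasticsearch; in pre-7.0 responses the flat fields describe the incremental copy (as the BWC serialization in this PR confirms), so `total` can only be approximated by the same values:

```python
def upgrade_stats(old):
    """Map the removed 6.x flat snapshot stats onto the 7.0 nested layout.

    Hypothetical client-side helper: in 6.x the flat fields describe the
    incremental copy, so "total" is approximated with the same values.
    """
    return {
        "incremental": {
            "file_count": old["number_of_files"],
            "size_in_bytes": old["total_size_in_bytes"],
        },
        "processed": {
            "file_count": old["processed_files"],
            "size_in_bytes": old["processed_size_in_bytes"],
        },
        # Best-effort: a 6.x response carries no separate total section.
        "total": {
            "file_count": old["number_of_files"],
            "size_in_bytes": old["total_size_in_bytes"],
        },
    }

old = {"number_of_files": 8, "processed_files": 8,
       "total_size_in_bytes": 4704, "processed_size_in_bytes": 4704}
new = upgrade_stats(old)
print(new["incremental"]["file_count"])  # prints 8
```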
56 changes: 56 additions & 0 deletions docs/reference/modules/snapshots.asciidoc
@@ -563,6 +563,62 @@ GET /_snapshot/my_backup/snapshot_1/_status
// CONSOLE
// TEST[continued]

The output looks similar to the following:

[source,js]
--------------------------------------------------
{
"snapshots": [
{
"snapshot": "snapshot_1",
"repository": "my_backup",
"uuid": "XuBo4l4ISYiVg0nYUen9zg",
"state": "SUCCESS",
"include_global_state": true,
"shards_stats": {
"initializing": 0,
"started": 0,
"finalizing": 0,
"done": 5,
"failed": 0,
"total": 5
},
"stats": {
"incremental": {
"file_count": 8,
"size_in_bytes": 4704
},
"processed": {
"file_count": 7,
"size_in_bytes": 4254
},
"total": {
"file_count": 8,
"size_in_bytes": 4704
},
"start_time_in_millis": 1526280280355,
"time_in_millis": 358,

"number_of_files": 8,
"processed_files": 8,
"total_size_in_bytes": 4704,
"processed_size_in_bytes": 4704
}
}
]
}
--------------------------------------------------
// TESTRESPONSE

The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were
snapshotted. As snapshots are incremental, copying only the Lucene segments that are not already in the repository,
the `stats` object contains a `total` section for all the files that are referenced by the snapshot, as well as an `incremental` section
for those files that actually needed to be copied over as part of the incremental snapshotting. In case of a snapshot that's still
in progress, there's also a `processed` section that contains information about the files that are in the process of being copied.

_Note_: The `number_of_files`, `processed_files`, `total_size_in_bytes` and `processed_size_in_bytes` properties are kept for
backward compatibility with older 5.x and 6.x versions. These fields will be removed in Elasticsearch v7.0.0.
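The nested sections make it easy to see how much data an incremental snapshot actually copied versus merely referenced. A minimal sketch, assuming a response shaped like the example above (the response literal below is abridged to the fields used):

```python
import json

# Abridged snapshot status response, shaped like the documented example.
response = json.loads("""
{
  "snapshots": [{
    "snapshot": "snapshot_1",
    "stats": {
      "incremental": {"file_count": 8, "size_in_bytes": 4704},
      "processed":   {"file_count": 7, "size_in_bytes": 4254},
      "total":       {"file_count": 8, "size_in_bytes": 4704}
    }
  }]
}
""")

stats = response["snapshots"][0]["stats"]
copied = stats["incremental"]["size_in_bytes"]  # bytes actually copied
total = stats["total"]["size_in_bytes"]         # bytes referenced by the snapshot
print(f"copied {copied} of {total} bytes "
      f"({100.0 * copied / total:.0f}% of referenced data)")
# prints: copied 4704 of 4704 bytes (100% of referenced data)
```

A ratio below 100% would indicate that earlier snapshots in the repository already held some of the referenced segments.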

Multiple ids are also supported:

[source,sh]
@@ -74,8 +74,8 @@ private SnapshotIndexShardStatus() {
throw new IllegalArgumentException("Unknown stage type " + indexShardStatus.getStage());
}
this.stats = new SnapshotStats(indexShardStatus.getStartTime(), indexShardStatus.getTotalTime(),
indexShardStatus.getNumberOfFiles(), indexShardStatus.getProcessedFiles(),
indexShardStatus.getTotalSize(), indexShardStatus.getProcessedSize());
indexShardStatus.getIncrementalFileCount(), indexShardStatus.getTotalFileCount(), indexShardStatus.getProcessedFileCount(),
indexShardStatus.getIncrementalSize(), indexShardStatus.getTotalSize(), indexShardStatus.getProcessedSize());
this.failure = indexShardStatus.getFailure();
this.nodeId = nodeId;
}
@@ -19,6 +19,7 @@

package org.elasticsearch.action.admin.cluster.snapshots.status;

import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
@@ -34,19 +34,25 @@ public class SnapshotStats implements Streamable, ToXContentFragment {

private long startTime;
private long time;
private int numberOfFiles;
private int processedFiles;
private int incrementalFileCount;
private int totalFileCount;
private int processedFileCount;
private long incrementalSize;
private long totalSize;
private long processedSize;

SnapshotStats() {
}

SnapshotStats(long startTime, long time, int numberOfFiles, int processedFiles, long totalSize, long processedSize) {
SnapshotStats(long startTime, long time,
int incrementalFileCount, int totalFileCount, int processedFileCount,
long incrementalSize, long totalSize, long processedSize) {
this.startTime = startTime;
this.time = time;
this.numberOfFiles = numberOfFiles;
this.processedFiles = processedFiles;
this.incrementalFileCount = incrementalFileCount;
this.totalFileCount = totalFileCount;
this.processedFileCount = processedFileCount;
this.incrementalSize = incrementalSize;
this.totalSize = totalSize;
this.processedSize = processedSize;
}
@@ -66,17 +73,31 @@ public long getTime() {
}

/**
* Returns number of files in the snapshot
* Returns incremental file count of the snapshot
*/
public int getNumberOfFiles() {
return numberOfFiles;
public int getIncrementalFileCount() {
return incrementalFileCount;
}

/**
* Returns total number of files in the snapshot
*/
public int getTotalFileCount() {
return totalFileCount;
}

/**
* Returns number of files in the snapshot that were processed so far
*/
public int getProcessedFiles() {
return processedFiles;
public int getProcessedFileCount() {
return processedFileCount;
}

/**
* Returns the incremental size of the snapshot
*/
public long getIncrementalSize() {
return incrementalSize;
}

/**
@@ -105,59 +126,109 @@ public void writeTo(StreamOutput out) throws IOException {
out.writeVLong(startTime);
out.writeVLong(time);

out.writeVInt(numberOfFiles);
out.writeVInt(processedFiles);
out.writeVInt(incrementalFileCount);
out.writeVInt(processedFileCount);

out.writeVLong(totalSize);
out.writeVLong(incrementalSize);
out.writeVLong(processedSize);

if (out.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
out.writeVInt(totalFileCount);
out.writeVLong(totalSize);
}
}

@Override
public void readFrom(StreamInput in) throws IOException {
startTime = in.readVLong();
time = in.readVLong();

numberOfFiles = in.readVInt();
processedFiles = in.readVInt();
incrementalFileCount = in.readVInt();
processedFileCount = in.readVInt();

totalSize = in.readVLong();
incrementalSize = in.readVLong();
processedSize = in.readVLong();

if (in.getVersion().onOrAfter(Version.V_7_0_0_alpha1)) {
totalFileCount = in.readVInt();
totalSize = in.readVLong();
} else {
totalFileCount = incrementalFileCount;
totalSize = incrementalSize;
}
}

static final class Fields {
static final String STATS = "stats";

static final String INCREMENTAL = "incremental";
static final String PROCESSED = "processed";
static final String TOTAL = "total";

static final String FILE_COUNT = "file_count";
static final String SIZE = "size";
static final String SIZE_IN_BYTES = "size_in_bytes";

static final String START_TIME_IN_MILLIS = "start_time_in_millis";
static final String TIME_IN_MILLIS = "time_in_millis";
static final String TIME = "time";

// BWC
static final String NUMBER_OF_FILES = "number_of_files";
static final String PROCESSED_FILES = "processed_files";
static final String TOTAL_SIZE_IN_BYTES = "total_size_in_bytes";
static final String TOTAL_SIZE = "total_size";
static final String TOTAL_SIZE_IN_BYTES = "total_size_in_bytes";
static final String PROCESSED_SIZE_IN_BYTES = "processed_size_in_bytes";
static final String PROCESSED_SIZE = "processed_size";
static final String START_TIME_IN_MILLIS = "start_time_in_millis";
static final String TIME_IN_MILLIS = "time_in_millis";
static final String TIME = "time";

}

@Override
public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
builder.startObject(Fields.STATS);
builder.field(Fields.NUMBER_OF_FILES, getNumberOfFiles());
builder.field(Fields.PROCESSED_FILES, getProcessedFiles());
builder.humanReadableField(Fields.TOTAL_SIZE_IN_BYTES, Fields.TOTAL_SIZE, new ByteSizeValue(getTotalSize()));
builder.humanReadableField(Fields.PROCESSED_SIZE_IN_BYTES, Fields.PROCESSED_SIZE, new ByteSizeValue(getProcessedSize()));
builder.field(Fields.START_TIME_IN_MILLIS, getStartTime());
builder.humanReadableField(Fields.TIME_IN_MILLIS, Fields.TIME, new TimeValue(getTime()));
builder.endObject();
return builder;
builder.startObject(Fields.STATS)
// incremental starts
[Review comment from a Contributor] you mean "stats", not "starts"? Idem for the other comments in this method
[Author reply] nope, i meant "starts" as "begins"
[Contributor] right, ok
.startObject(Fields.INCREMENTAL)
.field(Fields.FILE_COUNT, getIncrementalFileCount())
.humanReadableField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(getIncrementalSize()))
// incremental ends
.endObject();

if (getProcessedFileCount() != getIncrementalFileCount()) {
// processed starts
builder.startObject(Fields.PROCESSED)
.field(Fields.FILE_COUNT, getProcessedFileCount())
.humanReadableField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(getProcessedSize()))
// processed ends
.endObject();
}
// total starts
builder.startObject(Fields.TOTAL)
.field(Fields.FILE_COUNT, getTotalFileCount())
.humanReadableField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(getTotalSize()))
// total ends
.endObject();
// timings stats
builder.field(Fields.START_TIME_IN_MILLIS, getStartTime())
.humanReadableField(Fields.TIME_IN_MILLIS, Fields.TIME, new TimeValue(getTime()));

// BWC part
return builder.field(Fields.NUMBER_OF_FILES, getIncrementalFileCount())
.field(Fields.PROCESSED_FILES, getProcessedFileCount())
.humanReadableField(Fields.TOTAL_SIZE_IN_BYTES, Fields.TOTAL_SIZE, new ByteSizeValue(getIncrementalSize()))
.humanReadableField(Fields.PROCESSED_SIZE_IN_BYTES, Fields.PROCESSED_SIZE, new ByteSizeValue(getProcessedSize()))
// BWC part ends
.endObject();
}

void add(SnapshotStats stats) {
numberOfFiles += stats.numberOfFiles;
processedFiles += stats.processedFiles;
incrementalFileCount += stats.incrementalFileCount;
totalFileCount += stats.totalFileCount;
processedFileCount += stats.processedFileCount;

incrementalSize += stats.incrementalSize;
totalSize += stats.totalSize;
processedSize += stats.processedSize;


if (startTime == 0) {
// First time here
startTime = stats.startTime;