Merge branch 'master' into bump-compiler-to-jdk-10
* master:
  Fix BWC issue for PreSyncedFlushResponse
  Remove BytesArray and BytesReference usage from XContentFactory (elastic#29151)
  Add pluggable XContentBuilder writers and human readable writers (elastic#29120)
  Add unreleased version 6.2.4 (elastic#29171)
  Add unreleased version 6.1.5 (elastic#29168)
  Add a note about using the `retry_failed` flag before accepting data loss (elastic#29160)
  Fix typo in percolate-query.asciidoc (elastic#29155)
  Require HTTP::Tiny 0.070 for release notes script
jasontedor committed Mar 20, 2018 · 2 parents 89cd1cc + f938c42 · commit 0a63e21
Showing 55 changed files with 314 additions and 198 deletions.
2 changes: 1 addition & 1 deletion dev-tools/es_release_notes.pl
@@ -18,7 +18,7 @@
use strict;
use warnings;

-use HTTP::Tiny;
+use HTTP::Tiny 0.070;
use IO::Socket::SSL 1.52;
use utf8;

34 changes: 20 additions & 14 deletions docs/reference/cluster/reroute.asciidoc
@@ -83,8 +83,26 @@ Reasons why a primary shard cannot be automatically allocated include the following:
the cluster. To prevent data loss, the system does not automatically promote a stale
shard copy to primary.

-As a manual override, two commands to forcefully allocate primary shards
-are available:
+[float]
+=== Retry failed shards
+
+The cluster will attempt to allocate a shard a maximum of
+`index.allocation.max_retries` times in a row (defaults to `5`) before giving
+up and leaving the shard unallocated. This scenario can be caused by
+structural problems such as an analyzer that refers to a stopwords
+file that doesn't exist on all nodes.
+
+Once the problem has been corrected, allocation can be manually retried by
+calling the <<cluster-reroute,`reroute`>> API with `?retry_failed`, which
+will attempt a single retry round for these shards.
+
+[float]
+=== Forced allocation on unrecoverable errors
+
+The following two commands are dangerous and may result in data loss. They are
+meant to be used in cases where the original data cannot be recovered and the cluster
+administrator accepts the loss. If you have suffered a temporary issue that has since
+been fixed, see the `retry_failed` flag described above.

`allocate_stale_primary`::
Allocate a primary shard to a node that holds a stale copy. Accepts the
@@ -108,15 +126,3 @@
this command requires the special field `accept_data_loss` to be
explicitly set to `true` for it to work.

-[float]
-=== Retry failed shards
-
-The cluster will attempt to allocate a shard a maximum of
-`index.allocation.max_retries` times in a row (defaults to `5`), before giving
-up and leaving the shard unallocated. This scenario can be caused by
-structural problems such as having an analyzer which refers to a stopwords
-file which doesn't exist on all nodes.
-
-Once the problem has been corrected, allocation can be manually retried by
-calling the <<cluster-reroute,`reroute`>> API with `?retry_failed`, which
-will attempt a single retry round for these shards.
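For reference, the two recovery paths described in the added sections map onto the reroute API as follows — a sketch in the console style these docs use elsewhere (index and node names are placeholders):

[source,js]
--------------------------------------------------
POST /_cluster/reroute?retry_failed=true

POST /_cluster/reroute
{
  "commands" : [
    {
      "allocate_stale_primary" : {
        "index" : "my-index",
        "shard" : 0,
        "node" : "node-1",
        "accept_data_loss" : true
      }
    }
  ]
}
--------------------------------------------------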
2 changes: 1 addition & 1 deletion docs/reference/query-dsl/percolate-query.asciidoc
@@ -37,7 +37,7 @@ the `percolator` query before it gets indexed into a temporary index.
The `query` field is used for indexing the query documents. It will hold a
json object that represents an actual Elasticsearch query. The `query` field
has been configured to use the <<percolator,percolator field type>>. This field
-type understands the query dsl and stored the query in such a way that it can be
+type understands the query dsl and stores the query in such a way that it can be
used later on to match documents defined on the `percolate` query.

Register a query in the percolator:
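The console example that follows that sentence is collapsed in this view; a representative registration request, assuming the index and percolator mapping defined earlier on the page (names and document type are illustrative):

[source,js]
--------------------------------------------------
PUT /my-index/doc/1?refresh
{
  "query" : {
    "match" : {
      "message" : "bonsai tree"
    }
  }
}
--------------------------------------------------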
@@ -133,7 +133,7 @@ public class PercolateQueryBuilder extends AbstractQueryBuilder<PercolateQueryBu
*/
@Deprecated
public PercolateQueryBuilder(String field, String documentType, BytesReference document) {
-this(field, documentType, Collections.singletonList(document), XContentFactory.xContentType(document));
+this(field, documentType, Collections.singletonList(document), XContentHelper.xContentType(document));
}

/**
@@ -276,7 +276,7 @@ public PercolateQueryBuilder(String field, String documentType, String indexedDo
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
documentXContentType = in.readEnum(XContentType.class);
} else {
-documentXContentType = XContentFactory.xContentType(documents.iterator().next());
+documentXContentType = XContentHelper.xContentType(documents.iterator().next());
}
} else {
documentXContentType = null;
@@ -525,7 +525,7 @@ protected QueryBuilder doRewrite(QueryRewriteContext queryShardContext) {
return this; // not executed yet
} else {
return new PercolateQueryBuilder(field, documentType, Collections.singletonList(source),
-    XContentFactory.xContentType(source));
+    XContentHelper.xContentType(source));
}
}
GetRequest getRequest = new GetRequest(indexedDocumentIndex, indexedDocumentType, indexedDocumentId);
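All three hunks above make the same substitution: elastic#29151 removes the `BytesReference` overloads from `XContentFactory`, so content-type sniffing for raw bytes now goes through `XContentHelper`. A minimal sketch of the replacement call (payload illustrative):

import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;

class SniffExample {
    public static void main(String[] args) {
        BytesReference doc = new BytesArray("{\"message\":\"hello\"}");
        // Inspects the leading bytes; returns null if no xcontent format matches.
        XContentType type = XContentHelper.xContentType(doc);
        System.out.println(type); // JSON
    }
}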
12 changes: 12 additions & 0 deletions server/src/main/java/org/elasticsearch/Version.java
@@ -147,6 +147,10 @@ public class Version implements Comparable<Version> {
public static final Version V_6_1_2 = new Version(V_6_1_2_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
public static final int V_6_1_3_ID = 6010399;
public static final Version V_6_1_3 = new Version(V_6_1_3_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
+public static final int V_6_1_4_ID = 6010499;
+public static final Version V_6_1_4 = new Version(V_6_1_4_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
+public static final int V_6_1_5_ID = 6010599;
+public static final Version V_6_1_5 = new Version(V_6_1_5_ID, org.apache.lucene.util.Version.LUCENE_7_1_0);
public static final int V_6_2_0_ID = 6020099;
public static final Version V_6_2_0 = new Version(V_6_2_0_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
public static final int V_6_2_1_ID = 6020199;
@@ -155,6 +159,8 @@ public class Version implements Comparable<Version> {
public static final Version V_6_2_2 = new Version(V_6_2_2_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
public static final int V_6_2_3_ID = 6020399;
public static final Version V_6_2_3 = new Version(V_6_2_3_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
+public static final int V_6_2_4_ID = 6020499;
+public static final Version V_6_2_4 = new Version(V_6_2_4_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
public static final int V_6_3_0_ID = 6030099;
public static final Version V_6_3_0 = new Version(V_6_3_0_ID, org.apache.lucene.util.Version.LUCENE_7_2_1);
public static final int V_7_0_0_alpha1_ID = 7000001;
@@ -177,6 +183,8 @@ public static Version fromId(int id) {
return V_7_0_0_alpha1;
case V_6_3_0_ID:
return V_6_3_0;
+case V_6_2_4_ID:
+return V_6_2_4;
case V_6_2_3_ID:
return V_6_2_3;
case V_6_2_2_ID:
@@ -185,6 +193,10 @@
return V_6_2_1;
case V_6_2_0_ID:
return V_6_2_0;
+case V_6_1_5_ID:
+return V_6_1_5;
+case V_6_1_4_ID:
+return V_6_1_4;
case V_6_1_3_ID:
return V_6_1_3;
case V_6_1_2_ID:
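The new ID constants follow the packed-decimal scheme the rest of the file uses: major, minor, revision, and build packed into one int, with `99` marking a release build. A small sketch (helper class hypothetical; the arithmetic is confirmed by the constants above):

final class VersionIds {
    // major*1_000_000 + minor*10_000 + revision*100 + build
    static int versionId(int major, int minor, int revision, int build) {
        return major * 1_000_000 + minor * 10_000 + revision * 100 + build;
    }

    public static void main(String[] args) {
        System.out.println(versionId(6, 1, 4, 99)); // 6010499 == V_6_1_4_ID
        System.out.println(versionId(6, 2, 4, 99)); // 6020499 == V_6_2_4_ID
    }
}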
@@ -245,7 +245,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
builder.field(DELAYED_UNASSIGNED_SHARDS, getDelayedUnassignedShards());
builder.field(NUMBER_OF_PENDING_TASKS, getNumberOfPendingTasks());
builder.field(NUMBER_OF_IN_FLIGHT_FETCH, getNumberOfInFlightFetch());
-builder.timeValueField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
+builder.humanReadableField(TASK_MAX_WAIT_TIME_IN_QUEUE_IN_MILLIS, TASK_MAX_WAIT_TIME_IN_QUEUE, getTaskMaxWaitingTime());
builder.percentageField(ACTIVE_SHARDS_PERCENT_AS_NUMBER, ACTIVE_SHARDS_PERCENT, getActiveShardsPercent());

String level = params.param("level", "cluster");
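The swap from `timeValueField`/`byteSizeField` to `humanReadableField` here and in the hunks below comes from elastic#29120, which replaces the type-specific writers with a single pluggable human-readable writer. A sketch of the intended behavior, assuming the raw/readable two-name convention (field names taken from the cluster health response; output order not guaranteed):

import java.io.IOException;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;

class HumanReadableSketch {
    static void writeWaitTime(XContentBuilder builder) throws IOException {
        // The *_millis field is always written; the readable form is added
        // only when the builder was created with humanReadable(true).
        builder.humanReadableField("task_max_waiting_in_queue_millis",
                "task_max_waiting_in_queue", TimeValue.timeValueMillis(500));
        // human=true  -> "task_max_waiting_in_queue": "500ms", "task_max_waiting_in_queue_millis": 500
        // human=false -> "task_max_waiting_in_queue_millis": 500
    }
}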
@@ -68,7 +68,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
builder.field("version", nodeInfo.getVersion());
builder.field("build_hash", nodeInfo.getBuild().shortHash());
if (nodeInfo.getTotalIndexingBuffer() != null) {
builder.byteSizeField("total_indexing_buffer", "total_indexing_buffer_in_bytes", nodeInfo.getTotalIndexingBuffer());
builder.humanReadableField("total_indexing_buffer", "total_indexing_buffer_in_bytes", nodeInfo.getTotalIndexingBuffer());
}

builder.startArray("roles");
@@ -22,6 +22,7 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Streamable;
+import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.ToXContentFragment;
import org.elasticsearch.common.xcontent.XContentBuilder;
@@ -143,7 +144,7 @@ public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params par
builder.byteSizeField(Fields.TOTAL_SIZE_IN_BYTES, Fields.TOTAL_SIZE, getTotalSize());
builder.byteSizeField(Fields.PROCESSED_SIZE_IN_BYTES, Fields.PROCESSED_SIZE, getProcessedSize());
builder.field(Fields.START_TIME_IN_MILLIS, getStartTime());
-builder.timeValueField(Fields.TIME_IN_MILLIS, Fields.TIME, getTime());
+builder.humanReadableField(Fields.TIME_IN_MILLIS, Fields.TIME, new TimeValue(getTime()));
builder.endObject();
return builder;
}
@@ -488,7 +488,7 @@ static final class Fields {
@Override
public XContentBuilder toXContent(XContentBuilder builder, Params params)
throws IOException {
-builder.timeValueField(Fields.MAX_UPTIME_IN_MILLIS, Fields.MAX_UPTIME, maxUptime);
+builder.humanReadableField(Fields.MAX_UPTIME_IN_MILLIS, Fields.MAX_UPTIME, new TimeValue(maxUptime));
builder.startArray(Fields.VERSIONS);
for (ObjectIntCursor<JvmVersion> v : versions) {
builder.startObject();
@@ -125,7 +125,7 @@ public void readFrom(StreamInput in) throws IOException {
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
xContentType = in.readEnum(XContentType.class);
} else {
-xContentType = XContentFactory.xContentType(content);
+xContentType = XContentHelper.xContentType(content);
}
if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) {
context = in.readOptionalString();
@@ -198,7 +198,7 @@ static void toXContent(XContentBuilder builder, Sort sort) throws IOException {
static void toXContent(XContentBuilder builder, Accountable tree) throws IOException {
builder.startObject();
builder.field(Fields.DESCRIPTION, tree.toString());
-builder.byteSizeField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(tree.ramBytesUsed()));
+builder.humanReadableField(Fields.SIZE_IN_BYTES, Fields.SIZE, new ByteSizeValue(tree.ramBytesUsed()));
Collection<Accountable> children = tree.getChildResources();
if (children.isEmpty() == false) {
builder.startArray(Fields.CHILDREN);
@@ -25,7 +25,7 @@
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;

import java.io.IOException;
@@ -43,7 +43,7 @@ public class PutPipelineRequest extends AcknowledgedRequest<PutPipelineRequest>
*/
@Deprecated
public PutPipelineRequest(String id, BytesReference source) {
-this(id, source, XContentFactory.xContentType(source));
+this(id, source, XContentHelper.xContentType(source));
}

/**
@@ -83,7 +83,7 @@ public void readFrom(StreamInput in) throws IOException {
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
xContentType = in.readEnum(XContentType.class);
} else {
-xContentType = XContentFactory.xContentType(source);
+xContentType = XContentHelper.xContentType(source);
}
}

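The `readFrom` pattern repeated across these files is the standard wire-BWC gate: streams from 5.3.0 onward carry the content type as an explicit enum, while older streams force a sniff of the raw bytes. Presumably the write side mirrors the gate; a sketch under that assumption:

import java.io.IOException;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.xcontent.XContentType;

class WireBwcSketch {
    // Assumed write-side mirror of the readFrom gate shown in the hunks above.
    static void writeContentType(StreamOutput out, XContentType xContentType) throws IOException {
        if (out.getVersion().onOrAfter(Version.V_5_3_0)) {
            out.writeEnum(xContentType);
        }
        // pre-5.3 peers receive only the raw bytes and re-sniff on their side
    }
}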
@@ -25,8 +25,7 @@
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
-import org.elasticsearch.common.io.stream.Writeable;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.VersionType;
import org.elasticsearch.ingest.ConfigurationUtils;
@@ -56,7 +55,7 @@ public class SimulatePipelineRequest extends ActionRequest {
*/
@Deprecated
public SimulatePipelineRequest(BytesReference source) {
-this(source, XContentFactory.xContentType(source));
+this(source, XContentHelper.xContentType(source));
}

/**
@@ -78,7 +77,7 @@ public SimulatePipelineRequest(BytesReference source, XContentType xContentType)
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
xContentType = in.readEnum(XContentType.class);
} else {
-xContentType = XContentFactory.xContentType(source);
+xContentType = XContentHelper.xContentType(source);
}
}

@@ -35,7 +35,7 @@
import org.elasticsearch.common.lucene.uid.Versions;
import org.elasticsearch.common.util.set.Sets;
import org.elasticsearch.common.xcontent.XContentBuilder;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentParser;
import org.elasticsearch.common.xcontent.XContentType;
import org.elasticsearch.index.VersionType;
@@ -265,7 +265,7 @@ public TermVectorsRequest doc(XContentBuilder documentBuilder) {
*/
@Deprecated
public TermVectorsRequest doc(BytesReference doc, boolean generateRandomId) {
-return this.doc(doc, generateRandomId, XContentFactory.xContentType(doc));
+return this.doc(doc, generateRandomId, XContentHelper.xContentType(doc));
}

/**
@@ -518,7 +518,7 @@ public void readFrom(StreamInput in) throws IOException {
if (in.getVersion().onOrAfter(Version.V_5_3_0)) {
xContentType = in.readEnum(XContentType.class);
} else {
-xContentType = XContentFactory.xContentType(doc);
+xContentType = XContentHelper.xContentType(doc);
}
}
routing = in.readOptionalString();
@@ -24,6 +24,7 @@
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.common.io.stream.Writeable;
+import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.snapshots.Snapshot;

@@ -145,7 +146,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
{
builder.field("repository", entry.snapshot.getRepository());
builder.field("snapshot", entry.snapshot.getSnapshotId().getName());
builder.timeValueField("start_time_millis", "start_time", entry.startTime);
builder.humanReadableField("start_time_millis", "start_time", new TimeValue(entry.startTime));
builder.field("repository_state_id", entry.repositoryStateId);
}
builder.endObject();
@@ -27,6 +27,7 @@
import org.elasticsearch.common.collect.ImmutableOpenMap;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
+import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.shard.ShardId;
@@ -512,7 +513,7 @@ public void toXContent(Entry entry, XContentBuilder builder, ToXContent.Params p
}
}
builder.endArray();
-builder.timeValueField(START_TIME_MILLIS, START_TIME, entry.startTime());
+builder.humanReadableField(START_TIME_MILLIS, START_TIME, new TimeValue(entry.startTime()));
builder.field(REPOSITORY_STATE_ID, entry.getRepositoryStateId());
builder.startArray(SHARDS);
{
@@ -289,8 +289,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws
builder.field("allocation_id", allocationId);
}
if (allocationStatus == AllocationStatus.DELAYED_ALLOCATION) {
builder.timeValueField("configured_delay_in_millis", "configured_delay", TimeValue.timeValueMillis(configuredDelayInMillis));
builder.timeValueField("remaining_delay_in_millis", "remaining_delay", TimeValue.timeValueMillis(remainingDelayInMillis));
builder.humanReadableField("configured_delay_in_millis", "configured_delay",
TimeValue.timeValueMillis(configuredDelayInMillis));
builder.humanReadableField("remaining_delay_in_millis", "remaining_delay",
TimeValue.timeValueMillis(remainingDelayInMillis));
}
nodeDecisionsToXContent(nodeDecisions, builder, params);
return builder;
@@ -24,7 +24,7 @@
import org.elasticsearch.common.io.Streams;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
-import org.elasticsearch.common.xcontent.XContentFactory;
+import org.elasticsearch.common.xcontent.XContentHelper;
import org.elasticsearch.common.xcontent.XContentType;

import java.io.IOException;
@@ -44,11 +44,11 @@ public static Compressor compressor(BytesReference bytes) {
// bytes should be either detected as compressed or as xcontent,
// if we have bytes that can be either detected as compressed or
// as a xcontent, we have a problem
-assert XContentFactory.xContentType(bytes) == null;
+assert XContentHelper.xContentType(bytes) == null;
return COMPRESSOR;
}

-XContentType contentType = XContentFactory.xContentType(bytes);
+XContentType contentType = XContentHelper.xContentType(bytes);
if (contentType == null) {
if (isAncient(bytes)) {
throw new IllegalStateException("unsupported compression: index was created before v2.0.0.beta1 and wasn't upgraded?");
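`CompressorFactory.compressor` is the one call site here where the sniff serves as a negative check: the assert guards against bytes that parse both as a compressed stream and as xcontent. A usage sketch (hedged; `uncompressIfNeeded` is assumed to be the null-safe convenience wrapper around this dispatch):

import java.io.IOException;
import org.elasticsearch.common.bytes.BytesReference;
import org.elasticsearch.common.compress.CompressorFactory;

class CompressDispatchSketch {
    static BytesReference toPlainBytes(BytesReference bytes) throws IOException {
        // CompressorFactory.compressor(bytes) returns null for uncompressed
        // payloads; uncompressIfNeeded folds that null check into one call
        // and inflates only when a compressor is detected.
        return CompressorFactory.uncompressIfNeeded(bytes);
    }
}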
Diffs for the remaining changed files are not shown in this view.
