Initial data stream commit #53666

Merged · 15 commits · Mar 20, 2020
Changes from 7 commits
@@ -790,7 +790,10 @@ public void testApiNamingConventions() throws Exception {
"indices.get_upgrade",
"indices.put_alias",
"render_search_template",
"scripts_painless_execute"
"scripts_painless_execute",
"cluster.create_data_stream",
"cluster.get_data_streams",
"cluster.delete_data_stream"
};
// These APIs are not required for high-level client feature completeness
String[] notRequiredApi = new String[] {
distribution/archives/integ-test-zip/build.gradle (+7, −0)
@@ -1,3 +1,4 @@
import org.elasticsearch.gradle.info.BuildParams
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
@@ -32,3 +33,9 @@ integTest.runner {
systemProperty 'tests.logfile', '--external--'
}
}

testClusters.integTest {
if (BuildParams.isSnapshotBuild() == false) {
systemProperty 'es.datastreams_feature_flag_registered', 'true'
}
}
qa/smoke-test-multinode/build.gradle (+7, −0)
@@ -16,6 +16,7 @@
* specific language governing permissions and limitations
* under the License.
*/
import org.elasticsearch.gradle.info.BuildParams

apply plugin: 'elasticsearch.testclusters'
apply plugin: 'elasticsearch.standalone-rest-test'
@@ -46,3 +47,9 @@ integTest.runner {
].join(',')
}
}

testClusters.integTest {
if (BuildParams.isSnapshotBuild() == false) {
systemProperty 'es.datastreams_feature_flag_registered', 'true'
}
}
@@ -0,0 +1,31 @@
{
"cluster.create_data_stream":{
Contributor:
I wonder if this name should be: data_streams.create to be consistent with indices.create?

Other cluster operations seem to be mostly about the cluster state and cluster settings.

Member Author:

After chatting with Henning, I also came to the conclusion that data stream APIs should be considered indices-based operations. (Ideally we should have a data_stream notion, but we need to think more about this and how it would work in security.) So I will revert the commit in this PR, changing the data stream CRUD APIs back to indices-based APIs.

Member:

The component template APIs are cluster operations; I suspect those may need to be changed as well. Can you explain the reasoning and how it relates to security?

Member Author:

Just like indices-based APIs, data stream APIs target a namespace. So someone may be allowed to create a data stream for logs-* but not for events-*.

Ideally there should be a data stream privilege that handles this correctly, because data streams aren't indices, and a data stream may have backing indices that don't share the data stream name as a common prefix. But for now let's stick with indices: based action names and group the APIs in the indices client, until there is a better understanding of how security and data streams should integrate.

I think component templates should remain cluster-based APIs, because these resources don't apply to a namespace. The index template APIs that use component templates should be index-based actions/operations, since (based on the specified pattern) they apply to an index namespace, and soon also to a data stream namespace.

However, index templates are currently treated as a cluster privilege (see ClusterPrivilegeResolver line 196 in master), even though the action names start with the indices: prefix. With data streams we should properly fix this.

Member:

Okay, I originally had component template APIs using indices: to match the existing template stuff, but I did end up changing it because that requires the request to provide the indices that it is going to apply to, and component templates don't apply to indices. +1 to properly fix the template exception with data streams.

"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/data-streams.html",
"description":"Creates or updates a data stream"
},
"stability":"experimental",
"url":{
"paths":[
{
"path":"/_data_stream/{name}",
"methods":[
"PUT"
],
"parts":{
"name":{
"type":"string",
"description":"The name of the data stream"
}
}
}
]
},
"params":{
},
"body":{
"description":"The data stream definition",
"required":true
}
}
}
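The spec above maps a data stream name onto `PUT /_data_stream/{name}` with a required body. As a minimal sketch (the helper names below are illustrative, not the actual Elasticsearch client API), a client would fill the `{name}` path part and send a body like the one in the YAML test further down, which uses `timestamp_field_name`:

```java
// Hypothetical sketch of deriving a concrete request from the
// "cluster.create_data_stream" spec. Not the real Elasticsearch client.
public class CreateDataStreamRequestSketch {

    // Fill the {name} placeholder of the spec's path template.
    static String path(String dataStreamName) {
        return "/_data_stream/" + dataStreamName;
    }

    // The spec marks the body as required; this minimal body follows the
    // YAML test stub in this PR (timestamp_field_name).
    static String body(String timestampFieldName) {
        return "{\"timestamp_field_name\":\"" + timestampFieldName + "\"}";
    }

    public static void main(String[] args) {
        System.out.println("PUT " + path("data-stream2"));
        System.out.println(body("@timestamp"));
    }
}
```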
@@ -0,0 +1,26 @@
{
"cluster.delete_data_stream":{
Contributor:

data_streams.delete?

"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/data-streams.html",
"description":"Deletes a data stream."
},
"stability":"experimental",
"url":{
"paths":[
{
"path":"/_data_stream/{name}",
"methods":[
"DELETE"
],
"parts":{
"name":{
"type":"string",
"description":"The name of the data stream"
}
}
}
]
},
"params":{}
}
}
@@ -0,0 +1,33 @@
{
"cluster.get_data_streams":{
Contributor:

data_streams.get?

"documentation":{
"url":"https://www.elastic.co/guide/en/elasticsearch/reference/master/data-streams.html",
"description":"Returns data streams."
},
"stability":"experimental",
"url":{
"paths":[
{
"path":"/_data_streams",
"methods":[
"GET"
]
},
{
"path":"/_data_streams/{name}",
"methods":[
"GET"
],
"parts":{
"name":{
"type":"list",
"description":"The comma separated names of data streams"
}
}
}
]
},
"params":{
}
}
}
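This spec declares two paths: a bare `/_data_streams` for all data streams, and `/_data_streams/{name}` where the `"list"`-typed `{name}` part is a comma-separated list of names. A sketch of that path selection (the helper name is hypothetical, not the client API):

```java
import java.util.List;

// Hypothetical helper showing how the two paths in the
// "cluster.get_data_streams" spec are chosen.
public class GetDataStreamsPathSketch {

    static String path(List<String> names) {
        if (names == null || names.isEmpty()) {
            return "/_data_streams";                        // first path: all data streams
        }
        // second path: {name} is a "list" part, serialized comma-separated
        return "/_data_streams/" + String.join(",", names);
    }

    public static void main(String[] args) {
        System.out.println(path(List.of("logs-foo", "logs-bar")));
    }
}
```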
@@ -0,0 +1,26 @@
---
"Test stubs":
- skip:
version: " - 7.99.99"
reason: not backported yet

- do:
cluster.create_data_stream:
name: data-stream2
body:
timestamp_field_name: "@timestamp"
Contributor:

I think most places where we ask for a field, we use just "field" and not "field_name". I did not search all usages, but I think this should be just timestamp_field.

- is_true: acknowledged

- do:
cluster.get_data_streams: {}
- match: { 0.name: my_data_stream1 }
- match: { 0.timestamp_field_name: '@timestamp' }
- match: { 0.indices: ['my_data_stream1-000000'] }
- match: { 1.name: my_data_stream2 }
- match: { 1.timestamp_field_name: '@timestamp' }
- match: { 1.indices: [] }

- do:
cluster.delete_data_stream:
name: data-stream2
- is_true: acknowledged
server/src/main/java/org/elasticsearch/action/ActionModule.java (+39, −0)
@@ -21,12 +21,16 @@

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.Build;
import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainAction;
import org.elasticsearch.action.admin.cluster.allocation.TransportClusterAllocationExplainAction;
import org.elasticsearch.action.admin.cluster.configuration.AddVotingConfigExclusionsAction;
import org.elasticsearch.action.admin.cluster.configuration.ClearVotingConfigExclusionsAction;
import org.elasticsearch.action.admin.cluster.configuration.TransportAddVotingConfigExclusionsAction;
import org.elasticsearch.action.admin.cluster.configuration.TransportClearVotingConfigExclusionsAction;
import org.elasticsearch.action.admin.cluster.datastream.DeleteDataStreamAction;
import org.elasticsearch.action.admin.cluster.datastream.GetDataStreamsAction;
import org.elasticsearch.action.admin.cluster.datastream.CreateDataStreamAction;
import org.elasticsearch.action.admin.cluster.health.ClusterHealthAction;
import org.elasticsearch.action.admin.cluster.health.TransportClusterHealthAction;
import org.elasticsearch.action.admin.cluster.node.hotthreads.NodesHotThreadsAction;
@@ -243,9 +247,11 @@
import org.elasticsearch.rest.action.admin.cluster.RestClusterStatsAction;
import org.elasticsearch.rest.action.admin.cluster.RestClusterUpdateSettingsAction;
import org.elasticsearch.rest.action.admin.cluster.RestCreateSnapshotAction;
import org.elasticsearch.rest.action.admin.cluster.RestDeleteDataStreamAction;
import org.elasticsearch.rest.action.admin.cluster.RestDeleteRepositoryAction;
import org.elasticsearch.rest.action.admin.cluster.RestDeleteSnapshotAction;
import org.elasticsearch.rest.action.admin.cluster.RestDeleteStoredScriptAction;
import org.elasticsearch.rest.action.admin.cluster.RestGetDataStreamsAction;
import org.elasticsearch.rest.action.admin.cluster.RestGetRepositoriesAction;
import org.elasticsearch.rest.action.admin.cluster.RestGetScriptContextAction;
import org.elasticsearch.rest.action.admin.cluster.RestGetScriptLanguageAction;
@@ -258,6 +264,7 @@
import org.elasticsearch.rest.action.admin.cluster.RestNodesStatsAction;
import org.elasticsearch.rest.action.admin.cluster.RestNodesUsageAction;
import org.elasticsearch.rest.action.admin.cluster.RestPendingClusterTasksAction;
import org.elasticsearch.rest.action.admin.cluster.RestCreateDataStreamAction;
import org.elasticsearch.rest.action.admin.cluster.RestPutRepositoryAction;
import org.elasticsearch.rest.action.admin.cluster.RestPutStoredScriptAction;
import org.elasticsearch.rest.action.admin.cluster.RestReloadSecureSettingsAction;
@@ -361,6 +368,24 @@ public class ActionModule extends AbstractModule {

private static final Logger logger = LogManager.getLogger(ActionModule.class);

private static final boolean DATASTREAMS_FEATURE_FLAG_REGISTERED;

static {
final String property = System.getProperty("es.datastreams_feature_flag_registered");
if (Build.CURRENT.isSnapshot() && property != null) {
throw new IllegalArgumentException("es.datastreams_feature_flag_registered is only supported in non-snapshot builds");
}
if (Build.CURRENT.isSnapshot() || "true".equals(property)) {
DATASTREAMS_FEATURE_FLAG_REGISTERED = true;
} else if ("false".equals(property) || property == null) {
DATASTREAMS_FEATURE_FLAG_REGISTERED = false;
} else {
throw new IllegalArgumentException(
"expected es.datastreams_feature_flag_registered to be unset or [true|false] but was [" + property + "]"
);
}
}
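The static block above resolves the feature flag from four cases: snapshot builds always register the feature (and reject the system property as meaningless there), while release builds honor an explicit `true`/`false` and reject anything else. Extracted as a pure function so the cases are easy to see (`resolve` is an illustrative name, not part of ActionModule):

```java
// Sketch of the feature-flag resolution logic from ActionModule's static block.
public class FeatureFlagSketch {

    static boolean resolve(boolean isSnapshotBuild, String property) {
        if (isSnapshotBuild && property != null) {
            // Snapshot builds always enable the feature; setting the property is a mistake.
            throw new IllegalArgumentException(
                "es.datastreams_feature_flag_registered is only supported in non-snapshot builds");
        }
        if (isSnapshotBuild || "true".equals(property)) {
            return true;                    // snapshot build, or flag explicitly enabled
        } else if ("false".equals(property) || property == null) {
            return false;                   // release build with the flag unset or disabled
        } else {
            throw new IllegalArgumentException(
                "expected es.datastreams_feature_flag_registered to be unset or [true|false] but was ["
                    + property + "]");
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve(true, null));    // snapshot build: enabled
        System.out.println(resolve(false, null));   // release build, unset: disabled
    }
}
```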

private final Settings settings;
private final IndexNameExpressionResolver indexNameExpressionResolver;
private final IndexScopedSettings indexScopedSettings;
@@ -533,6 +558,13 @@ public <Request extends ActionRequest, Response extends ActionResponse> void reg

actionPlugins.stream().flatMap(p -> p.getActions().stream()).forEach(actions::register);

// Data streams:
if (DATASTREAMS_FEATURE_FLAG_REGISTERED) {
actions.register(CreateDataStreamAction.INSTANCE, CreateDataStreamAction.TransportAction.class);
actions.register(DeleteDataStreamAction.INSTANCE, DeleteDataStreamAction.TransportAction.class);
actions.register(GetDataStreamsAction.INSTANCE, GetDataStreamsAction.TransportAction.class);
}

// Persistent tasks:
actions.register(StartPersistentTaskAction.INSTANCE, StartPersistentTaskAction.TransportAction.class);
actions.register(UpdatePersistentTaskStatusAction.INSTANCE, UpdatePersistentTaskStatusAction.TransportAction.class);
@@ -680,6 +712,13 @@ public void initRestHandlers(Supplier<DiscoveryNodes> nodesInCluster) {
registerHandler.accept(new RestDeletePipelineAction());
registerHandler.accept(new RestSimulatePipelineAction());

// Data Stream API
if (DATASTREAMS_FEATURE_FLAG_REGISTERED) {
registerHandler.accept(new RestCreateDataStreamAction());
registerHandler.accept(new RestDeleteDataStreamAction());
registerHandler.accept(new RestGetDataStreamsAction());
}

// CAT API
registerHandler.accept(new RestAllocationAction());
registerHandler.accept(new RestShardsAction());
@@ -0,0 +1,128 @@
/*
* Licensed to Elasticsearch under one or more contributor
* license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright
* ownership. Elasticsearch licenses this file to you under
* the Apache License, Version 2.0 (the "License"); you may
* not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.elasticsearch.action.admin.cluster.datastream;
Contributor:

I think we should move this outside the cluster package, so datastream is a sibling to indices.

We might also want to follow the same substructure by adding a create package now; it's probably easiest to split into subpackages early.


import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.ActionRequestValidationException;
import org.elasticsearch.action.ActionType;
import org.elasticsearch.action.support.ActionFilters;
import org.elasticsearch.action.support.master.AcknowledgedResponse;
import org.elasticsearch.action.support.master.MasterNodeRequest;
import org.elasticsearch.action.support.master.TransportMasterNodeAction;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.block.ClusterBlockLevel;
import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
import org.elasticsearch.cluster.service.ClusterService;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.common.io.stream.StreamOutput;
import org.elasticsearch.tasks.Task;
import org.elasticsearch.threadpool.ThreadPool;
import org.elasticsearch.transport.TransportService;

import java.io.IOException;
import java.util.Objects;

public class CreateDataStreamAction extends ActionType<AcknowledgedResponse> {

public static final CreateDataStreamAction INSTANCE = new CreateDataStreamAction();
public static final String NAME = "cluster:admin/data_stream/create";
Contributor:

I am in doubt whether we should add a specific prefix here. No need to look at this now; we can pick it up later.


private CreateDataStreamAction() {
super(NAME, AcknowledgedResponse::new);
}

public static class Request extends MasterNodeRequest<Request> {

private final String name;
private String timestampFieldName;

public Request(String name) {
this.name = name;
}

public void setTimestampFieldName(String timestampFieldName) {
this.timestampFieldName = timestampFieldName;
}

@Override
public ActionRequestValidationException validate() {
return null;
}

public Request(StreamInput in) throws IOException {
super(in);
this.name = in.readString();
this.timestampFieldName = in.readString();
}
Contributor:

Do we need to read the optional indices field here for creating a data stream with existing indices?

Member Author:

For now this API will only create new, empty data streams.
When we add the ability to add existing indices to a data stream, then we can add an optional indices field here. Makes sense?


@Override
public void writeTo(StreamOutput out) throws IOException {
super.writeTo(out);
out.writeString(name);
out.writeString(timestampFieldName);
}

@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Request request = (Request) o;
return name.equals(request.name) &&
timestampFieldName.equals(request.timestampFieldName);
}

@Override
public int hashCode() {
return Objects.hash(name, timestampFieldName);
}
}

public static class TransportAction extends TransportMasterNodeAction<Request, AcknowledgedResponse> {

@Inject
public TransportAction(TransportService transportService, ClusterService clusterService, ThreadPool threadPool,
ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver) {
super(NAME, transportService, clusterService, threadPool, actionFilters, Request::new, indexNameExpressionResolver);
}

@Override
protected String executor() {
return ThreadPool.Names.SAME;
}

@Override
protected AcknowledgedResponse read(StreamInput in) throws IOException {
return new AcknowledgedResponse(in);
}

@Override
protected void masterOperation(Task task, Request request, ClusterState state,
ActionListener<AcknowledgedResponse> listener) throws Exception {
listener.onResponse(new AcknowledgedResponse(true));
}

@Override
protected ClusterBlockException checkBlock(Request request, ClusterState state) {
return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE);
}
}

}
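The `Request(StreamInput)` constructor and `writeTo(StreamOutput)` above form a wire-format pair: fields must be read back in exactly the order they were written (name first, then timestampFieldName), or the stream is misread. A self-contained stand-in using plain `java.io` data streams instead of Elasticsearch's `StreamInput`/`StreamOutput` (class and method names here are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Illustrative stand-in for the Request (de)serialization above.
public class RequestWireSketch {
    final String name;
    final String timestampFieldName;

    RequestWireSketch(String name, String timestampFieldName) {
        this.name = name;
        this.timestampFieldName = timestampFieldName;
    }

    // Serialize then deserialize, mirroring writeTo(StreamOutput) and
    // Request(StreamInput): read order must match write order exactly.
    static RequestWireSketch roundTrip(RequestWireSketch request) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeUTF(request.name);                 // writeTo: name first
            out.writeUTF(request.timestampFieldName);   // writeTo: timestamp field second
            DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()));
            return new RequestWireSketch(in.readUTF(), in.readUTF()); // same order on read
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        RequestWireSketch copy = roundTrip(new RequestWireSketch("logs", "@timestamp"));
        System.out.println(copy.name + " " + copy.timestampFieldName);
    }
}
```

Note that the sketch, like the PR code, writes `timestampFieldName` unconditionally, so the field must be set before serialization.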