Used SO for saving the API key IDs that should be deleted #82211

Merged

Conversation

@YulNaumenko (Contributor) commented Oct 30, 2020

Used a saved object (SO) for saving the API key IDs that should be deleted, and added a configuration option to set the execution interval for a Task Manager (TM) task that reads the IDs from this SO and removes the keys marked for deletion.
Resolves #53868

Note for docs

Two new configuration options:

  • xpack.alerts.invalidateApiKeysTask.interval: '5m'
  • xpack.alerts.invalidateApiKeysTask.removalDelay: '5m'
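
For reference, a minimal sketch of how options like these are typically declared with @kbn/config-schema - illustrative only, not the actual contents of x-pack/plugins/alerts/server/config.ts:

import { schema, TypeOf } from '@kbn/config-schema';

// Hypothetical shape; the defaults mirror the documented values above.
export const configSchema = schema.object({
  invalidateApiKeysTask: schema.object({
    // How often the Task Manager task runs.
    interval: schema.string({ defaultValue: '5m' }),
    // How long a key stays marked before it is actually invalidated.
    removalDelay: schema.string({ defaultValue: '5m' }),
  }),
});

export type AlertsConfig = TypeOf<typeof configSchema>;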

@YulNaumenko self-assigned this Oct 30, 2020
@YulNaumenko marked this pull request as ready for review November 4, 2020 18:28
@YulNaumenko requested a review from a team as a code owner November 4, 2020 18:29
@YulNaumenko added the Feature:Alerting and Team:ResponseOps labels Nov 4, 2020
@elasticmachine (Contributor):

Pinging @elastic/kibana-alerting-services (Team:Alerting Services)

@YulNaumenko added the release_note:skip, v7.11.0, and v8.0.0 labels Nov 4, 2020
@mikecote self-requested a review November 6, 2020 11:46
@mikecote (Contributor) left a comment:

This feature is going to be sooo great to have! We've had a lot of flaky tests because of this and added a lot of code to work around it. This will allow us to clean some of that up in the future ❤️ .

I have a bunch of optional nits, questions and comments. I will do a final pass on the tests and functionality later (in case some core changes are made).

@YulNaumenko requested a review from mikecote November 9, 2020 05:44
@mikecote (Contributor) left a comment:

Changes LGTM! 👍 Just one thing on the hasApiKeysPendingInvalidation variable, the for loop for invalidating keys, and a few nits.

@pmuellr (Member) left a comment:

LGTM, but left some questions and comments. I'm curious about the api key ids, and whether they need to be encrypted. Also noted we should get someone in security to review the overall approach.

x-pack/plugins/alerts/server/config.ts (resolved)
if (!apiKey) {
return;
}
const apiKeyId = Buffer.from(apiKey, 'base64').toString().split(':')[0];
Member:

Huh - it looks like the security plugin expects the api key id to be passed on deletion; I'm surprised to find that the id is a base64 encoding of the key itself! Seems like this code could be fragile if that changed, but perhaps we aren't otherwise storing the id separately because they are related like this.

Member:

And just a thought: if this is the apiKey itself, just in base64, then we really need to use encrypted saved objects here.

Member:

And we should have someone from security review this.

Contributor:

The important piece is the last one: .split(':')[0]. An API key credential is the id and key joined by ':' and encoded in base64, so this code reverses that to recover the id alone.
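
For illustration, a tiny round-trip of that encoding (the id and key values here are invented):

// Elasticsearch returns the credential as base64("id:api_key").
const credential = Buffer.from('someKeyId:someKeySecret').toString('base64');
// Reversing it, as the code above does, recovers the id alone.
const apiKeyId = Buffer.from(credential, 'base64').toString().split(':')[0];
// apiKeyId === 'someKeyId'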

Member:

ahhhh! Seems comment-worthy then, won't scare me next time I see it :-)

Member:

I think we should be encrypting this, even though we're only storing the id, which isn't the "secret" part. Someone with access to the .kibana index shouldn't be able to enumerate these keys, even if they're going to be invalidated in the near future.

Access to API Key information is currently governed by the manage_api_keys and manage_own_api_keys cluster privileges, and storing the ids in plaintext feels like we'd be violating this expectation.

Additional nit: Should we move this line inside the try block to prevent this from throwing an error if the API Key is somehow malformed?
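
A minimal sketch of that nit, with the decode moved inside the try block (markApiKeyForInvalidation and logger are illustrative names, not this PR's actual code):

try {
  // A malformed apiKey (not base64, or missing the ':' separator) now
  // fails inside the try instead of throwing out of the surrounding function.
  const apiKeyId = Buffer.from(apiKey, 'base64').toString().split(':')[0];
  await markApiKeyForInvalidation(apiKeyId);
} catch (e) {
  logger.error(`Failed to mark API key for invalidation: ${e.message}`);
}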

x-pack/test/alerting_api_integration/common/config.ts (outdated, resolved)
securityPluginSetup?: SecurityPluginSetup
) {
let totalInvalidated = 0;
await Promise.all(
Member:

So this means we're sending all the invalidation requests at once? I'm worried about overwhelming ES here - shouldn't we break this into batches of something like 10 or 20 at a time?

We should also ask security about a batch version of invalidating these.

Member:

I think we should have a debug log call here - probably just one for all the keys, printing totalInvalidated at the end.

@YulNaumenko (Contributor, author) commented Nov 10, 2020:

The security team has an open issue for batch invalidation by an array of ids (#79714), so I will open a follow-up issue and add a dependency on it.

Member:

I'd prefer to chunk this out as well, regardless of #79714. Using Promise.all with a large input array will negatively impact the Kibana server's performance.

@YulNaumenko (Contributor, author):

Isn't it currently restricted by the page size of 100 in const apiKeysToInvalidate = await savedObjectsClient.find<InvalidatePendingApiKey>(...)? I think a maximum of 100 won't impact Kibana that much.

Member:

> I think a maximum of 100 won't impact Kibana that much.

I would hope not, and in practice, you're probably right. We haven't benchmarked this one way or another to know for sure, but we all have to keep in mind that the Kibana server is a shared resource, where any number of operations could be happening simultaneously. The starting tier on ESS only provides a single GB of RAM, which is a fairly constrained environment considering everything that Kibana is capable of doing.

@YulNaumenko (Contributor, author):

I may cut the number in half - make it 50 :-)
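
To make the batching concrete, a sketch of chunked invalidation using lodash's chunk (the invalidateApiKey callback and batch size are illustrative, not the PR's actual code):

import { chunk } from 'lodash';

// Run at most batchSize invalidation requests concurrently, instead of
// fanning out a whole 100-item page in a single Promise.all.
async function invalidateApiKeysInBatches(
  apiKeyIds: string[],
  invalidateApiKey: (id: string) => Promise<void>,
  batchSize: number = 10
): Promise<number> {
  let totalInvalidated = 0;
  for (const batch of chunk(apiKeyIds, batchSize)) {
    await Promise.all(batch.map((id) => invalidateApiKey(id)));
    totalInvalidated += batch.length;
  }
  // One debug line for all the keys, as suggested earlier in the review:
  // logger.debug(`Invalidated ${totalInvalidated} API keys`);
  return totalInvalidated;
}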

x-pack/plugins/alerts/server/saved_objects/index.ts (outdated, resolved)
@YulNaumenko requested a review from legrego November 16, 2020 21:08
x-pack/plugins/alerts/server/config.ts (resolved)
@YulNaumenko requested a review from legrego November 17, 2020 02:02
@legrego (Member) left a comment:

LGTM from the security side of things - thanks for the edits.

@YulNaumenko merged commit 8b658fb into elastic:master Nov 17, 2020
YulNaumenko added a commit to YulNaumenko/kibana that referenced this pull request Nov 17, 2020

* Used a saved object (SO) for saving the API key IDs that should be deleted, and created a configuration option to set the execution interval for a TM task that reads the data from this SO and removes the keys marked for deletion.

* removed invalidateApiKey from AlertsClient

* Fixed type checks

* Fixed jest tests

* Removed test code

* Changed SO name

* fixed type checks

* Moved invalidate logic out of alerts client

* fixed type check

* Added functional tests

* Fixed due to comments

* added configurable delay for invalidation task

* added interval to the task response

* Fixed jest tests

* Fixed due to comments

* Fixed task

* fixed paging

* Fixed date filter

* Fixed jest tests

* fixed due to comments

* fixed due to comments

* Fixed e2e test

* Fixed e2e test

* Fixed due to comments. Changed api key invalidation task to use SavedObjectClient

* Use encryptedSavedObjectClient

* set back flaky test comment
YulNaumenko added a commit that referenced this pull request Nov 17, 2020 (#83547), with the same commit list as above.
gmmorris added a commit to gmmorris/kibana that referenced this pull request Nov 17, 2020, merging master (51 commits), including: Used SO for saving the API key IDs that should be deleted (elastic#82211).
@kibanamachine (Contributor) commented Nov 24, 2020

💔 Build Failed

Failed CI Steps


Test Failures

Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/maps.maps app "before all" hook in "maps app"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 3 times on tracked branches: https://github.com/elastic/kibana/issues/50387

[00:00:00]       │
[00:00:00]         └-: maps app
[00:00:00]           └-> "before all" hook
[00:00:00]           └-> "before all" hook
[00:00:00]             │ info [logstash_functional] Loading "mappings.json"
[00:00:00]             │ info [logstash_functional] Loading "data.json.gz"
[00:00:00]             │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1606175958256289594] failed on parsing mappings on index creation [logstash-2015.09.22]
[00:00:00]             │      org.elasticsearch.index.mapper.MapperParsingException: Failed to parse mapping: No handler for type [runtime] declared on field [runtime_number]
[00:00:00]             │      	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:308) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:281) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.updateIndexMappingsAndBuildSortOrder(MetadataCreateIndexService.java:915) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.lambda$applyCreateIndexWithTemporaryService$2(MetadataCreateIndexService.java:409) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.indices.IndicesService.withTempIndexService(IndicesService.java:621) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexWithTemporaryService(MetadataCreateIndexService.java:407) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequestWithV1Templates(MetadataCreateIndexService.java:485) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:370) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService.applyCreateIndexRequest(MetadataCreateIndexService.java:377) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.metadata.MetadataCreateIndexService$1.execute(MetadataCreateIndexService.java:300) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:59) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:697) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:319) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:214) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:151) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:674) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215) [elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
[00:00:00]             │      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
[00:00:00]             │      	at java.lang.Thread.run(Thread.java:832) [?:?]
[00:00:00]             │      Caused by: org.elasticsearch.index.mapper.MapperParsingException: No handler for type [runtime] declared on field [runtime_number]
[00:00:00]             │      	at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:311) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:232) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:150) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:94) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:83) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	at org.elasticsearch.index.mapper.MapperService.internalMerge(MapperService.java:306) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:00:00]             │      	... 22 more
[00:00:00]             │ info Taking screenshot "/dev/shm/workspace/parallel/1/kibana/x-pack/test/functional/screenshots/failure/maps app _before all_ hook.png"
[00:00:00]             │ info Current URL is: data:/,
[00:00:00]             │ info Saving page source to: /dev/shm/workspace/parallel/1/kibana/x-pack/test/functional/failure_debug/html/maps app _before all_ hook.html
[00:00:00]             └- ✖ fail: maps app "before all" hook in "maps app"
[00:00:00]             │      Error: [mapper_parsing_exception] No handler for type [runtime] declared on field [runtime_number]
[00:00:00]             │       at respond (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)
[00:00:00]             │       at checkRespForFailure (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)
[00:00:00]             │       at HttpConnector.<anonymous> (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
[00:00:00]             │       at IncomingMessage.wrapper (/dev/shm/workspace/kibana/node_modules/lodash/lodash.js:4949:19)
[00:00:00]             │       at endReadableNT (_stream_readable.js:1223:12)
[00:00:00]             │       at processTicksAndRejections (internal/process/task_queues.js:84:21)
[00:00:00]             │ 
[00:00:00]             │ 

Stack Trace

StatusCodeError: [mapper_parsing_exception] No handler for type [runtime] declared on field [runtime_number]
    at respond (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:349:15)
    at checkRespForFailure (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/transport.js:306:7)
    at HttpConnector.<anonymous> (/dev/shm/workspace/kibana/node_modules/elasticsearch/src/lib/connectors/http.js:173:7)
    at IncomingMessage.wrapper (/dev/shm/workspace/kibana/node_modules/lodash/lodash.js:4949:19)
    at endReadableNT (_stream_readable.js:1223:12)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  status: 400,
  displayName: 'BadRequest',
  path: '/logstash-2015.09.22',
  query: {},
  body: {
    error: {
      root_cause: [Array],
      type: 'mapper_parsing_exception',
      reason: 'Failed to parse mapping: No handler for type [runtime] declared on field [runtime_number]',
      caused_by: [Object]
    },
    status: 400
  },
  statusCode: 400,
  response: '{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"No handler for type [runtime] declared on field [runtime_number]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping: No handler for type [runtime] declared on field [runtime_number]","caused_by":{"type":"mapper_parsing_exception","reason":"No handler for type [runtime] declared on field [runtime_number]"}},"status":400}',
  toString: [Function],
  toJSON: [Function]
}

Chrome X-Pack UI Functional Tests.x-pack/test/functional/apps/maps.maps app "before all" hook in "maps app"

Link to Jenkins

Failed Tests Reporter:
  - Test has failed 3 times on tracked branches: https://github.com/elastic/kibana/issues/50387

Same failure as above, hit by a second parallel worker; the log and stack trace (StatusCodeError: [mapper_parsing_exception] No handler for type [runtime] declared on field [runtime_number]) are identical to the first report.

Firefox XPack UI Functional Tests.x-pack/test/functional/apps/canvas.Canvas app "before all" hook in "Canvas app"

Link to Jenkins

Failed Tests Reporter:
  - Test has failed 1 time on tracked branches: https://dryrun

Same root cause as the maps failures above: loading the logstash_functional archive fails with StatusCodeError: [mapper_parsing_exception] No handler for type [runtime] declared on field [runtime_number].

and 9 more failures, only showing the first 3.

Metrics [docs]

Distributable file count

id       before  after  diff
default  42835   42838  +3

Saved Objects .kibana field count

Every field in each saved object type adds overhead to Elasticsearch. Kibana needs to keep the total field count below Elasticsearch's default limit of 1000 fields. Only specify field mappings for the fields you wish to search on or query. See https://www.elastic.co/guide/en/kibana/master/development-plugin-saved-objects.html#_mappings

id                            before  after  diff
api_key_pending_invalidation  -       3      +3
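
For context, a hypothetical sketch of the kind of minimal mapping behind the +3 above; the actual fields registered for api_key_pending_invalidation may differ:

// Only fields that are searched on get explicit mappings; everything else
// stays unmapped to keep the total .kibana field count low.
const apiKeyPendingInvalidationMappings = {
  properties: {
    apiKeyId: { type: 'keyword' }, // matched when invalidating
    createdAt: { type: 'date' },   // filtered on using removalDelay
  },
} as const;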

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

Labels
Feature:Alerting, needs_docs, release_note:fix, Team:ResponseOps, v7.11.0, v8.0.0
Development

Successfully merging this pull request may close these issues.

Advanced API Key invalidation
6 participants