esArchiver datastream support #132853
Conversation
@elasticmachine merge upstream
[CI] Auto-commit changed files from 'node scripts/precommit_hook.js --ref HEAD~1..HEAD --fix'
Pinging @elastic/kibana-operations (Team:Operations)
LGTM, thank you so much for getting this in!
```js
// if keepIndexNames is false, rewrite the .kibana_* index to .kibana_1 so that
// when it is loaded it can skip migration, if possible
index: hit._index.startsWith('.kibana') && !keepIndexNames ? '.kibana_1' : hit._index,
data_stream: dataStream,
```
Nit: Part of me would prefer that docs either had an index or a data_stream, but I'm not opposed to keeping the index if there's some use for it.
I mainly kept it for traceability when debugging or inspecting archived data, besides that there's no real use for it :)
@elasticmachine merge upstream
💚 Build Succeeded
Metrics [docs]
History
To update your PR or re-run it, just comment with: @elasticmachine merge upstream
cc @klacabane
Friendly reminder: Looks like this PR hasn’t been backported yet.
* aliases fallback
* nasty datastream support implementation
* datastreams stats method
* update filter stream
* datastream support for unload action
* create-index datastream support
* index records data stream support
* doc records data streams support
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* lint
* pull composable templates
* set data_stream as a separate property on documents
* force create bulk operation when datastream record
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* lint
* getIndexTemplate tests
* [CI] Auto-commit changed files from 'node scripts/precommit_hook.js --ref HEAD~1..HEAD --fix'
* share cache across transform executions

Co-authored-by: kibanamachine <[email protected]>
(cherry picked from commit 4c4f0f5)
💚 All backports created successfully
Note: Successful backport PRs will be merged automatically after passing CI. Questions? Please refer to the Backport tool documentation
* aliases fallback
* nasty datastream support implementation
* datastreams stats method
* update filter stream
* datastream support for unload action
* create-index datastream support
* index records data stream support
* doc records data streams support
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* lint
* pull composable templates
* set data_stream as a separate property on documents
* force create bulk operation when datastream record
* [CI] Auto-commit changed files from 'node scripts/eslint --no-cache --fix'
* lint
* getIndexTemplate tests
* [CI] Auto-commit changed files from 'node scripts/precommit_hook.js --ref HEAD~1..HEAD --fix'
* share cache across transform executions

Co-authored-by: kibanamachine <[email protected]>
(cherry picked from commit 4c4f0f5)

Co-authored-by: Kevin Lacabane <[email protected]>
Summary
Fixes #69061
Adds support for archiving/loading/unloading data streams.
When archiving indices we now check whether an index is a backing index of a data stream. When that's the case, we build the data stream's index template, resolving any component template links, and save it as a `data_stream` record type in the `mappings.json` file. While multiple backing indices of the same data stream can be returned, we only output a single entry containing the latest mappings and settings.
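For illustration, a `data_stream` record in `mappings.json` could look roughly like this; the field names and nesting below are assumptions based on the description above, not the exact schema written by the archiver:

```ts
// Hypothetical shape of a `data_stream` record in mappings.json.
// Only the general idea (one record per data stream, carrying the resolved
// index template) comes from this PR; the field names are illustrative.
const dataStreamRecord = {
  type: 'data_stream',
  value: {
    // the data stream the archived backing indices belong to
    data_stream: 'my-data-stream',
    // index template with component template links already resolved/merged
    template: {
      mappings: {
        properties: {
          '@timestamp': { type: 'date' },
          message: { type: 'text' },
        },
      },
      settings: {
        number_of_shards: 1,
      },
    },
  },
};
```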
The documents associated with a data stream keep the same `doc` type but carry an additional `data_stream` property, used at load time to index them into the appropriate target (we can't write directly to a backing index) and to pick the correct bulk operation (data streams only accept `create`).
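As a rough sketch of the load-time behavior described above (illustrative types and helper, not the PR's actual implementation):

```ts
// Minimal sketch: turning an archived doc record into a bulk operation.
// The record shape mirrors the description above; names are illustrative.
interface DocRecord {
  type: 'doc';
  value: {
    index: string; // original backing index, kept for traceability
    data_stream?: string; // present when the doc belongs to a data stream
    id: string;
    source: Record<string, unknown>;
  };
}

function toBulkOperation(record: DocRecord) {
  const { index, data_stream: dataStream, id, source } = record.value;
  // Data streams can't be written to through their backing indices and only
  // accept the `create` bulk operation, so we target the stream itself.
  const action = dataStream
    ? { create: { _index: dataStream, _id: id } }
    : { index: { _index: index, _id: id } };
  return [action, source];
}
```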
Note: the index template could reference an ILM policy that we currently don't save or create when loading the archive. We can add that if there are use cases that would benefit from it.
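For context, an ILM policy is typically referenced from the template's settings (standard Elasticsearch behavior, with a hypothetical policy name below); the archive keeps the template but does not save or recreate the policy it points to:

```ts
// Illustrative only: a template whose settings reference an ILM policy.
// 'my-lifecycle-policy' is a made-up name; only the setting key is standard.
const templateWithIlm = {
  settings: {
    'index.lifecycle.name': 'my-lifecycle-policy',
  },
  mappings: {
    properties: {
      '@timestamp': { type: 'date' },
    },
  },
};
```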
Testing
Manual steps
```
node scripts/es_archiver.js save ~/my-data-stream my-data-stream --es-url=http://elastic:changeme@localhost:9200 --kibana-url=http://elastic:changeme@localhost:5601/pat
node scripts/es_archiver.js load ~/my-data-stream --es-url=http://elastic:changeme@localhost:9200 --kibana-url=http://elastic:changeme@localhost:5601/pat
```
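The PR also adds unload support; a matching unload run (not listed above, assuming the standard `es_archiver` `unload` command with the same flags) would be:

```
node scripts/es_archiver.js unload ~/my-data-stream --es-url=http://elastic:changeme@localhost:9200 --kibana-url=http://elastic:changeme@localhost:5601/pat
```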