Allow relocating SO to different indices during migration #154151
Conversation
Pinging @elastic/kibana-core (Team:Core)
@@ -118,7 +118,7 @@ describe('migration v2', () => {
await root.preboot();
await root.setup();
await expect(root.start()).rejects.toMatchInlineSnapshot(
-  `[Error: Unable to complete saved object migrations for the [.kibana] index: The document with _id "canvas-workpad-template:workpad-template-061d7868-2b4e-4dc8-8bf7-3772b52926e5" is 1715248 bytes which exceeds the configured maximum batch size of 1015275 bytes. To proceed, please increase the 'migrations.maxBatchSizeBytes' Kibana configuration option and ensure that the Elasticsearch 'http.max_content_length' configuration option is set to an equal or larger value.]`
+  `[Error: Unable to complete saved object migrations for the [.kibana] index: The document with _id "canvas-workpad-template:workpad-template-061d7868-2b4e-4dc8-8bf7-3772b52926e5" is 1715286 bytes which exceeds the configured maximum batch size of 1015275 bytes. To proceed, please increase the 'migrations.maxBatchSizeBytes' Kibana configuration option and ensure that the Elasticsearch 'http.max_content_length' configuration option is set to an equal or larger value.]`
Passing-by comment: We really need to stop matching on exact byte size. It's not the first time we've had to update the snapshots based on that change alone.
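One way to decouple the assertion from exact byte counts — a minimal sketch, assuming Jest's `rejects.toThrowError` regex matcher; the pattern is illustrative:

```ts
// Sketch: assert on the error's shape instead of exact byte sizes, so that
// unrelated changes to document contents don't invalidate the snapshot.
await expect(root.start()).rejects.toThrowError(
  /is \d+ bytes which exceeds the configured maximum batch size of \d+ bytes/
);
```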
Great work @gsoldevila, I am going to take another look, but I left a few comments for now (mostly nits). Let me know what you think!
const migrator = new KibanaMigrator(options);

-await expect(() => migrator.runMigrations()).toThrowErrorMatchingInlineSnapshot(
-  `"Migrations are not ready. Make sure prepareMigrations is called first."`
+expect(migrator.runMigrations()).rejects.toThrowError(
Curious, I thought you would need an `await expect(...)` here.
AFAIK, as long as the test returns a Promise, Jest will await it.
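For illustration, a minimal sketch of the two equivalent forms (assuming a `migrator` fixture). The key is that the assertion promise reaches Jest's runner, either via `await` or via `return`; a dangling, unreturned assertion would silently pass:

```ts
// Form 1: explicitly await the async assertion.
it('rejects if prepareMigrations was not called', async () => {
  await expect(migrator.runMigrations()).rejects.toThrowError(
    'Migrations are not ready. Make sure prepareMigrations is called first.'
  );
});

// Form 2: return the assertion promise; Jest awaits whatever the test returns.
it('rejects if prepareMigrations was not called', () => {
  return expect(migrator.runMigrations()).rejects.toThrowError(
    'Migrations are not ready. Make sure prepareMigrations is called first.'
  );
});
```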
// build a list of all migrators that must be started
const migratorIndices = new Set(Object.keys(indexMap));
// indices involved in a relocation might no longer be present in current mappings
"current mappings": feels like a very loaded term 😄
I did not realise that we were planning on supporting index split as a more general concept (seems to add more complexity). My understanding is that we are currently planning this as a one-off split: .kibana
and .kibana_task_manager
to some bigger set of new indices including .kibana
and .kibana_task_manager
.
Stated differently I thought we are only introducing functionality now for moving documents to new (non-existent indices). Does it simplify this implementation/change to think about the problem that way? Can we introduce more general/sophisticated capabilities later?
Edit: I am guessing we won't have a deadlock because we will always reindex to temp then to a new index for these kinds of moves.
Another thought, in this more generalised version of moving docs to other indices, it seems possible that we can have a deadlock where:
typeA
inindexA
move toindexB
typeB
inindexB
move toindexA
Unless we are explicitly disallowing this already.
The most likely case I could imagine is if a type wants to move back to .kibana
and some other type wants to move out of .kibana
. But yeah, seems quite unlikely for the foreseeable future.
> My understanding is that we are currently planning this as a one-off split

That's what I thought too. 8.8.0 is the last time we're going to be reindexing, and further changes like these won't be allowed.
I considered that too, but then I realised that the more generic relocation logic is not much more complex.

When upgrading to future versions, e.g. >= 8.9.0, we will have 2 possibilities:

- A) Upgrading from a "pre-split" version (<= 8.7.0), where all SO are in our well-known indices, `.kibana` and `.kibana_task_manager`.
- B) Upgrading from a "post-split" version (>= 8.8.0), where some SO types have been relocated to new indices.

Thus, the logic must already take into account more than one scenario. In other words, we need a mechanism to know whether some SO types need to be dispatched to other indices, and which indices they need to be dispatched to.

This takes us pretty close to the generic implementation already, where we have:

- A picture of the current type => index distribution, stored in the `.kibana` mapping's `_meta`.
- A picture of the desired type => index distribution, calculated from the `typeRegistry` information.

Then, we compare both pictures and know which types must be relocated (see the sketch below):

- In scenario A) we will have something like: `'cases', 'cases-comments', ...` must be moved to `.kibana_cases`.
- In scenario B) the maps will match, so we will not relocate any types.
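A minimal sketch of that comparison (hypothetical helper and type names, not the PR's actual internals):

```ts
type IndexTypesMap = Record<string, string[]>; // index name => SO types it holds

interface Relocation {
  type: string;
  from: string;
  to: string;
}

// Diff the current distribution (from .kibana's mapping _meta) against the
// desired one (from the typeRegistry) to find the types that must move.
function getRelocations(current: IndexTypesMap, desired: IndexTypesMap): Relocation[] {
  const currentIndexByType = new Map<string, string>();
  for (const [index, types] of Object.entries(current)) {
    for (const type of types) currentIndexByType.set(type, index);
  }

  const relocations: Relocation[] = [];
  for (const [index, types] of Object.entries(desired)) {
    for (const type of types) {
      const from = currentIndexByType.get(type);
      if (from !== undefined && from !== index) {
        relocations.push({ type, from, to: index });
      }
    }
  }
  return relocations; // scenario A => e.g. cases types moving; scenario B => []
}
```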
Regarding the potential deadlock: I think it shouldn't be a problem, thanks to the temporary indices + synchronisation between migrators.

> That's what I thought too. 8.8.0 is the last time we're going to be reindexing, and further changes like these won't be allowed.

Probably not, but who knows what will happen in the future. Having the generic mechanism ready could be handy if we ever experience field explosion and absolutely must reindex to solve it (even if that causes downtime).
@@ -738,6 +777,32 @@ export const model = (currentState: State, resW: ResponseType<AllActionStates>):
    // left responses to handle here.
    throwBadResponse(stateP, res);
  }
} else if (stateP.controlState === 'READY_TO_REINDEX_SYNC') {
If we need to synchronise migrators within a Kibana instance, how does the algorithm for moving documents to different indices behave when there are multiple Kibana instances (where no realtime sync between them exists)?
That's a very good question.

If we consider the current "single index" scenario, we can already have multiple instances trying to reindex into the same temporary index at the same time:

- Some migrators might override other instances' work, but eventually they will all finish (no sync required between them).
- Actually, the first migrator that finishes reindexing into the temporary index will write-lock that index and then proceed to clone it to the final index. Other migrators that are still bulk indexing will detect the lock and proceed to the next step, also attempting to clone, etc.

Now, regarding the "multiple migrators per instance, multiple instances" scenario (which BTW will not happen in serverless):

- Instance 1 starts migrators for indices A, B and C. We are moving SO type T from A to B, so A and B need to synchronise their paths.
- Instance 2 does the same thing, a tad later.
- The synchronisation happens twice during the flow:
  - Right after the temp indices for A and B have been created:
    - Imagine the temp indices are created by 1A and 2B.
    - The other migrators (1B, 2A) will eventually attempt to create the temp indices too, detect that they already exist, and proceed to synchronise (1B will synchronise with 1A, and 2A will sync with 2B).
    - No migrator will start reindexing into a temp index until all temp indices have been created ✅
  - Right after the temp indices have been filled, and before locking them:
    - Instance 1 will not try to lock the temp indices until it is sure that they have all been filled. Same for instance 2.
    - Imagine 1A and 1B have both finished, so they synchronise and lock their indices. They then proceed to clone.
    - 2A has finished writing to temp but 2B is still bulk indexing, so 2A waits for 2B at the synchronisation step.
    - 2B will detect the lock and assume another instance (aka 1B) has completed indexing to temp ✅
    - 2A and 2B will synchronise, attempt to lock, attempt to clone, etc. ✅

TLDR: The important thing about the synchronisation points is making sure that all migrators of an instance reach a certain state before proceeding with the flow. Whether 1A reaches the state because it has completed the work (or because 2A did it faster) does not make a difference; what matters is that the work is done before we proceed.

In the example, we're relocating T from A to B, so what matters is that A migrators wait for B migrators to create the `B_temp_index` before they start bulk indexing docs into that index. A migrators don't care whether the index is created by B1 or B2.

Pheeeew, that was a good exercise, hope it makes sense!
Yes, that does make sense! Thank you for the detailed explanation :D
Your explanation is so useful I think it is worth capturing somewhere in a comment or an MD file :) WDYT?
Overall the implementation looks fine to me. Can't find anything really blocking.
My remarks are in the comments:
defer.resolve();
return defer.promise
  .then(() => Either.right('synchronized_successfully' as const))
  .catch((error) => Either.left({ type: 'sync_failed' as const, error }));
Looking at your implementation of `Defer`, I'm not sure I understand what we're doing here. `defer.resolve` will de facto resolve the promise, therefore:

- we don't need to chain the promise
- the catch block is useless

To rephrase, how is the snippet different from this one:

return () => {
  defer.resolve();
  return Promise.resolve(Either.right('synchronized_successfully' as const));
}

I probably missed something obvious, though?
Looking at the tests, I'm afraid I'm starting to understand what's done here. You're rewriting the deferred objects to have their `promise` property be decoupled from their `resolve` and `reject` ones, right?

That's highly misleading, as the concept of `Deferred` implies the opposite. I would use another name and format (or even pattern) for our need here. In Go, this is called a WaitGroup: https://gobyexample.com/waitgroups
Indeed, the promise is updated so that it awaits all the defers. I'll rename it.
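For reference, one possible shape for such a WaitGroup-like helper — a sketch with hypothetical names, not the PR's actual `createMultiPromiseDefer` implementation. Each participant resolves its own slot, but every `promise` only settles once all participants have resolved:

```ts
interface WaitGroupMember<T> {
  resolve: (value: T) => void;
  reject: (reason?: unknown) => void;
  promise: Promise<T[]>; // the shared barrier: settles when ALL members resolved
}

function createWaitGroup<T>(keys: string[]): Record<string, WaitGroupMember<T>> {
  const resolvers: Record<string, Pick<WaitGroupMember<T>, 'resolve' | 'reject'>> = {};
  const promises = keys.map(
    (key) =>
      new Promise<T>((resolve, reject) => {
        // Promise executors run synchronously, so `resolvers` is filled here.
        resolvers[key] = { resolve, reject };
      })
  );
  const barrier = Promise.all(promises); // rejects if any member rejects
  return Object.fromEntries(keys.map((key) => [key, { ...resolvers[key], promise: barrier }]));
}

// Usage: a migrator resolves its own slot, then waits for everyone else.
const syncPoint = createWaitGroup<void>(['.kibana', '.kibana_cases']);
syncPoint['.kibana'].resolve();
await syncPoint['.kibana'].promise; // settles once .kibana_cases has resolved too
```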
@@ -84,6 +87,7 @@ export class KibanaMigrator implements IKibanaMigrator {
  kibanaVersion,
  logger,
  docLinks,
  defaultIndexTypesMap = DEFAULT_INDEX_TYPES_MAP,
It feels very wrong to have a default value for this? I assume it's to ease testing, but it seems very error-prone imho.
Fair point.

We can proceed to use the current implementation as a baseline for the `dot-kibana-split` branch if you are okay with that. Then, we will get rid of this parameter when we switch to the alternative strategy of using `migrationMappingsPropertyHashes` to build the `indexTypesMap`.
logger.debug(`The ${mainIndex} index does NOT exist. Assuming this is a fresh deployment`);
return undefined;
What if, for example, the `.kibana` index was deleted (manually) but not the other ones? Could it be an issue in any way? It's probably nitpicking, but I'm just trying to figure out the impact on those edge cases.
I'm afraid this is an issue today already.

We have ES archiver, and also some tests, that manually delete the `.kibana` index. I've learnt that the hard way in my draft PR (where I split cases into a separate index). In fact, the `'.kibana'` string is widely hardcoded across the Kibana codebase. My goal with the newly exposed `const MAIN_KIBANA_SAVED_OBJECT_INDEX = '.kibana';` was to try and replace those occurrences and centralise this knowledge for better discoverability.

If we manually delete only the `.kibana` index and we run migrations afterwards, there are mainly 2 scenarios:

- Pre-split: No problem, as the other indices did NOT exist yet.
- Post-split: We might have some SO in indices that have not been removed.
  - The logic will assume a fresh deployment, and will start 1 migrator for each SO index.
    - The `.kibana` migrator will create a fresh new index.
    - The other migrators (e.g. `.kibana_cases`) will start, notice that the index exists, and compare the mappings:
      - If the mappings match or the changes are compatible, the logic will run through the "skip reindex" path and update them in place.
      - If the mapping changes are NOT compatible, the migration will attempt to reindex:
        - If we are on a newer stack version => reindex path.
        - If we are on the same stack version as the one the indices were created with => FATAL.
          - This should not happen in real life (same stack version, same mappings).
          - Don't know about QA environments. @dokmic ?
          - However, this can happen in some tests using ES archiver, where we import an archive and happily use the $KIBANA_PACKAGE_VERSION to create the current-version index, although we don't have current-version mappings (which is totally wrong). I believe this was done to skip reindexing on large archives.
client: ElasticsearchClient,
transformRawDocs: TransformRawDocs,
readyToReindex: Defer<void>,
doneReindexing: Defer<void>
NIT: 4 args is a fair number to switch to keyword args (an options object) instead of positional args.
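A sketch of the suggested refactor; the function name is hypothetical, and `ElasticsearchClient`, `TransformRawDocs`, and `Defer` are the types from the snippet above:

```ts
interface ReindexSyncParams {
  client: ElasticsearchClient;
  transformRawDocs: TransformRawDocs;
  readyToReindex: Defer<void>;
  doneReindexing: Defer<void>;
}

// Before: createReindexer(client, transformRawDocs, readyToReindex, doneReindexing)
// After: call sites name every argument, so ordering mistakes become impossible.
function createReindexer({
  client,
  transformRawDocs,
  readyToReindex,
  doneReindexing,
}: ReindexSyncParams) {
  // ...
}
```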
client: ElasticsearchClient,
transformRawDocs: TransformRawDocs,
readyToReindex: Defer<void>,
doneReindexing: Defer<void>
Same as the previous comment.
@@ -52,6 +52,7 @@ export type {
  SavedObjectsExportablePredicate,
} from './src/saved_objects_management';
export type { SavedObjectStatusMeta } from './src/saved_objects_status';
export { MAIN_SAVED_OBJECT_INDEX } from './src/saved_objects_index_pattern';
Do we really need this to be exported from the public package? I'd rather keep this internal (in one of our internal packages), especially given we already have an API for core-externals to retrieve this info at runtime:

kibana/packages/core/saved-objects/core-saved-objects-server/src/contracts.ts (lines 136 to 139 in 47d1a0e):

/**
 * Returns the default index used for saved objects.
 */
getKibanaIndex: () => string;
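For illustration, a hypothetical consumer of that runtime API instead of a hardcoded literal (the function, its parameters, and the query are illustrative; `getKibanaIndex` is assumed to live on the saved objects setup contract shown above):

```ts
import type { ElasticsearchClient, SavedObjectsServiceSetup } from '@kbn/core/server';

// Resolve the SO index at runtime rather than hardcoding '.kibana'.
async function countDashboards(
  savedObjects: SavedObjectsServiceSetup,
  client: ElasticsearchClient
): Promise<number> {
  const index = savedObjects.getKibanaIndex();
  const { count } = await client.count({ index, query: { term: { type: 'dashboard' } } });
  return count;
}
```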
The `'.kibana'` string is widely hardcoded across the codebase, especially in tests. I wanted to factorise this and centralise this knowledge somehow. The end goal is discoverability, and also having a centralised place to define the names of the SO indices.

If there's a test that manually deletes the `.kibana` index, I would like to replace that with a list of all SO indices. This list could be exposed by core (see my draft PR) so that when we decide to add a new SO index, the tests transparently take it into account.

That being said, perhaps it'd be better to expose a cleanup method somewhere (SavedObjectsRepository?) and encourage users to use that instead. How could we prevent them from accessing the `.kibana` index directly?
After looking at the external usages of `.kibana` (e.g. in FTR / es-tools), I think this export is fine. Ideally it would be exported from a utility package and not from the public types one, but this can be changed later as a follow-up.
We should make sure `getCurrentIndexTypesMap` is resilient to intermittent failures; otherwise, really impressive work 👯
const readyToReindex = readyToReindexDefers[indexName];
const doneReindexing = doneReindexingDefers[indexName];
// check if this migrator's index is involved in some document redistribution
const mustRelocateDocuments = !!readyToReindex;
nit: I think it would be easier to follow / read this code if we did the following, even if it's functionally equivalent:

const mustRelocateDocuments = indicesWithMovingTypes.includes(indexName);

Otherwise you need to know how createMultiPromiseDefer works before you understand what we're actually doing here: checking whether this index has a moving type.
const meta = Object.values(mapping)?.[0]?.mappings._meta;
return meta?.indexTypesMap ?? defaultIndexTypesMap;
} catch (error) {
  if (error.meta?.statusCode === 404) {
The cloud proxy itself sometimes returns a 404 without even asking ES, so it's dangerous to rely on "404 => index does not exist" (#135542).

Actions in the migrator are automatically retried, but here, if we get an intermittent failure, the whole migration would fail. We should at least retry requests with something like
https://github.com/elastic/kibana/blob/dot-kibana-split/packages/core/elasticsearch/core-elasticsearch-server-internal/src/retry_call_cluster.ts#L61
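A generic sketch of that kind of guard — illustrative, not core's actual `retryCallCluster`; a real version would only retry retriable errors:

```ts
// Retry a flaky ES call with linear backoff before letting the error bubble up.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 5,
  delayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt < attempts) {
        await new Promise((resolve) => setTimeout(resolve, delayMs * attempt));
      }
    }
  }
  throw lastError;
}

// e.g.: const mapping = await withRetries(() => client.indices.getMapping({ index: '.kibana' }));
```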
@gsoldevila should this PR be closed given we now have #154888?
## Description

Fix #104081

This PR moves some of the SO types from the `.kibana` index into the following ones:

- `.kibana_alerting_cases`
- `.kibana_analytics`
- `.kibana_security_solution`
- `.kibana_ingest`

This split/reallocation will occur during the `8.8.0` Kibana upgrade (*meaning: from any version older than `8.8.0` to any version greater than or equal to `8.8.0`*).

**This PR's main changes are:**

- implement the changes required in the SO migration algorithm to support this reallocation
- update the FTR tools (looking at you esArchiver) to support these new indices
- update hardcoded references to `.kibana` and usages of `core.savedObjects.getKibanaIndex()` to use new APIs to target the correct index/indices
- update FTR datasets, tests and utilities accordingly

## To reviewers

**Overall estimated risk of regressions: low.** But, still, please take the time to review changes in your code.

The parts of the production code that were the most impacted are the telemetry collectors, as most of them were performing direct requests against the `.kibana` index, so we had to adapt them. Most other contributor-owned changes are in FTR tests and datasets.

If you think a type is misplaced (either we missed some types that should be moved to a specific index, or some types were moved and shouldn't have been), please tell us, and we'll fix the reallocation either in this PR or in a follow-up.

## .kibana split

The following new indices are introduced by this PR, with the following SO types being moved to them (any SO type not listed here stays in its current index). Note: the complete **_type => index_** breakdown is available in [this spreadsheet](https://docs.google.com/spreadsheets/d/1b_MG_E_aBksZ4Vkd9cVayij1oBpdhvH4XC8NVlChiio/edit#gid=145920788).

#### `.kibana_alerting_cases`

- action
- action_task_params
- alert
- api_key_pending_invalidation
- cases
- cases-comments
- cases-configure
- cases-connector-mappings
- cases-telemetry
- cases-user-actions
- connector_token
- rules-settings
- maintenance-window

#### `.kibana_security_solution`

- csp-rule-template
- endpoint:user-artifact
- endpoint:user-artifact-manifest
- exception-list
- exception-list-agnostic
- osquery-manager-usage-metric
- osquery-pack
- osquery-pack-asset
- osquery-saved-query
- security-rule
- security-solution-signals-migration
- siem-detection-engine-rule-actions
- siem-ui-timeline
- siem-ui-timeline-note
- siem-ui-timeline-pinned-event

#### `.kibana_analytics`

- canvas-element
- canvas-workpad-template
- canvas-workpad
- dashboard
- graph-workspace
- index-pattern
- kql-telemetry
- lens
- lens-ui-telemetry
- map
- search
- search-session
- search-telemetry
- visualization

#### `.kibana_ingest`

- epm-packages
- epm-packages-assets
- fleet-fleet-server-host
- fleet-message-signing-keys
- fleet-preconfiguration-deletion-record
- fleet-proxy
- ingest_manager_settings
- ingest-agent-policies
- ingest-download-sources
- ingest-outputs
- ingest-package-policies

## Tasks / PRs

### Sub-PRs

**Implementation**

- 🟣 #154846
- 🟣 #154892
- 🟣 #154882
- 🟣 #154884
- 🟣 #155155

**Individual index split**

- 🟣 #154897
- 🟣 #155129
- 🟣 #155140
- 🟣 #155130

### Improvements / follow-ups

- 👷🏼 Extract logic into [runV2Migration](#154151 (comment)) @gsoldevila
- Make `getCurrentIndexTypesMap` resilient to intermittent failures #154151 (comment)
- 🚧 Build a more structured [MigratorSynchronizer](#154151 (comment))
- 🟣 #155035
- 🟣 #155116
- 🟣 #155366

## Reallocation tweaks

Tweaks to the reallocation can be done after the initial merge, as long as they are done before the public release of 8.8:

- `url` should get back to `.kibana` (see [comment](#154888 (comment)))

## Release Note

For performance purposes, Kibana is now using more system indices to store its internal data. The following system indices will be created when upgrading to `8.8.0`:

- `.kibana_alerting_cases`
- `.kibana_analytics`
- `.kibana_security_solution`
- `.kibana_ingest`

---

Co-authored-by: pgayvallet <[email protected]>
Co-authored-by: Christos Nasikas <[email protected]>
Co-authored-by: kibanamachine <[email protected]>
Co-authored-by: Georgii Gorbachev <[email protected]>
Replaced by #154846
Refactor KibanaMigrator, improve readability, maintainability and UT (#155693)

Addresses the following feedback: #154151 (comment)

Similar to what has been done for ZDT, the goal of this PR is to extract the logic of `runV2Migration()` from the `KibanaMigrator` into a separate file. The PR also fixes some incomplete / incorrect UTs and adds a few missing ones.
Refactor KibanaMigrator, improve readability, maintainability and UT (elastic#155693)

Addresses the following feedback: elastic#154151 (comment)

Similar to what has been done for ZDT, the goal of this PR is to extract the logic of `runV2Migration()` from the `KibanaMigrator` into a separate file. The PR also fixes some incomplete / incorrect UTs and adds a few missing ones.

(cherry picked from commit 06c337f)

# Conflicts:
#	packages/core/saved-objects/core-saved-objects-migration-server-internal/src/kibana_migrator.test.ts
#	packages/core/saved-objects/core-saved-objects-migration-server-internal/src/kibana_migrator.ts
#	packages/core/saved-objects/core-saved-objects-migration-server-internal/src/run_resilient_migrator.ts
#	src/core/server/integration_tests/saved_objects/migrations/kibana_migrator_test_kit.ts
Refactor KibanaMigrator, improve readability, maintainability and UT (#155693) (#158953)

# Backport

This will backport the following commits from `main` to `8.8`:

- [Refactor KibanaMigrator, improve readability, maintainability and UT (#155693)](#155693)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

Co-authored-by: kibanamachine <[email protected]>
Part of #104081

This PR updates the migrator logic so that it can dispatch SO documents to different system indices during a migration.

We are NOT splitting `.kibana` into separate system indices with this PR, only setting the grounds to be able to do so. Once this PR is merged, we can update calls to `typeRegistry.registerType()` to make certain types use a different index, e.g.:
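For example, a registration could look roughly like this — a sketch: the type name, flags, and mappings are illustrative, and `indexPattern` is assumed to be the `SavedObjectsType` property used to pick the target index:

```ts
core.savedObjects.registerType({
  name: 'cases',
  hidden: true,
  namespaceType: 'multiple-isolated',
  indexPattern: '.kibana_cases', // store this type outside the default .kibana index
  mappings: {
    properties: {
      title: { type: 'text' },
    },
  },
});
```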