Removing legacy multitenancy #82020
Comments
@restrry yup, good call. Will do so now. I'll be creating GitHub issues for the other known situations where import/export doesn't work and linking them to the problems that need to be solved before import/export can be used.
@kobelb I don't see an alternative that we can migrate to. tl;dr: developers run local Kibana instances and connect to a shared Elasticsearch cluster. Elastic Cloud doesn't support CCS/CCR with clusters outside Elastic Cloud (e.g. local clusters), and Spaces doesn't seem applicable in this context (correct me if I'm wrong).
@sqren The "kbn es support for CCS/CCR" task above, with no details, was meant to address your team's use-case. I'll flesh this out in more detail here shortly.
I want to point out here that our test clusters have a large amount of data (3-4 TB), and up to 80 developers can work at the same time in the same cluster (on different local Kibana instances pointing to a common remote Elasticsearch).
FYI, the known import/export issues mentioned above have been addressed recently.
@kobelb Do we need to somehow actively notify the impacted type owners that they can now start working on 'enabling' import/export for their types?
We've internally chased some of the options to at least make it easier to have a local Elasticsearch instance take ownership of CCS, but that sadly looks like it will be more involved: still hard to set up and to keep up to date with cycling server CA certificates, so it isn't a viable alternative. I am very conflicted about this one:
At the same time:
This does sound like a best-of-both-worlds approach: a single Kibana index, but doubling down on Spaces to provide segmentation, e.g. defining a …
It's not feasible to use Spaces for developer segmentation. There are a number of subsystems within Kibana that aren't segmented by Space, and this would lead to conflicts between developers. For example, if developer A were to add a new saved-object type that developer B doesn't have, this would prevent developer B's Kibana from starting up. All default tenants of Kibana that share an Elasticsearch cluster must be the same version and have the same plugins installed; otherwise, things just don't work properly.
What's the expected behaviour if v8 spins up with one of these settings still configured?
Starting in 8.0, users will be unable to specify these settings at all; Kibana will refuse to start if they are present in the configuration.
Hi @kobelb (and the rest of the thread) 👋 It appears that #108111 went in, which I suspect was related to the work outlined in #101964. As I mention in the PR, this has broken the Observability Test Clusters, which in turn has disrupted the development workflow for a number of folks on the Observability team. As has been discussed before, we did know this change was coming but had hoped it would land more toward the October time-frame instead of early August. Sadly, our planned migration work has run into some roadblocks which we are attempting to work around, but that work has not yet been completed. As mentioned in the PR, I'd like to ask the Kibana folks for a temporary revert of #108111 until such time that we can unblock our migration work. I suspect this can be done by mid to late September, if not well before, but we're just not in a position where we can deploy our desired workaround as of today. The mid-September timeline was recently communicated by our team in an email exchange between @kuisathaverat and @alexh97 on the Kibana team where we responded to the inquiry. We're going to immediately search for additional workarounds on our end in the event that Kibana isn't able to revert this PR temporarily, but I just wanted to raise the issue in this thread as well for added visibility. Thanks in advance, and apologies for any lack of communication on our end that may have led to this. :) cc: @weltenwort
Sorry about that @cachedout, #108111 has been reverted so you all can continue using those settings for the time being.
@cachedout Is there a ticket we can use to follow and know when we can remerge this PR? Nevermind, found it! https://github.com/elastic/observability-test-environments/issues/915
The reason we need to stand up multiple instances of Kibana attached to the same Elasticsearch cluster is to support different localizations. With this ticket, that strategy will no longer work. Will Spaces be able to have different locale settings?
Hey @dbuijs, you can have multiple Kibana nodes/processes with different locale settings sharing the same `kibana.index`.
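For illustration, a minimal sketch of what two per-locale configurations might look like. `i18n.locale` and `elasticsearch.hosts` are real Kibana settings; the ports and file names are illustrative, and both processes point at the same cluster and share the default `.kibana` index:

```yaml
# kibana-en.yml — first Kibana process, default English locale
server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "en"

# kibana-ja.yml — second process against the same cluster, Japanese locale
server.port: 5602
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "ja-JP"
```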
We were concerned that different Kibana nodes sharing the same `kibana.index` would overwrite each other, because this happened with earlier versions of Elasticsearch. Note that we need to make changes to index patterns and runtime fields in the different locales. Will different Kibana nodes sharing a `kibana.index` be able to maintain separate index patterns and display settings for the same Elasticsearch indexes?
They will not. Our recommendation would be to use Spaces to segment your index patterns. Using Kibana's RBAC model, you can grant a subset of your users access to different Spaces.
I can't do that unless I can have different locale settings for different Spaces on the same Kibana instance. Has this been considered by the Kibana team? Would it be helpful for me to create a new issue for this?
Based on this quote, there is an issue to raise for Reporting:
Reports are Kibana entities that can't be imported and exported using saved-object management. There is no path to transitioning reports to use Spaces, either. It should be understood that the only way a user will be able to view historical reports that are in a custom index in 7.x is to download them to another form of storage before upgrading to 8.0. @kobelb is this acceptable? cc @elastic/kibana-reporting-services
IMO, yes, because reports can be easily regenerated if they are needed. However, I delegate my real opinion to @alexfrancoeur.
I found a related issue on the text of the deprecation message: #114217
I'd like to hear @sixstringcode's thoughts as well, but this sounds acceptable to me. One thought, outside of downloading them, would be to re-index to a "historical reports" index. Should we / could we make this an optional task that an administrator is asked about during the upgrade and / or part of the upgrade assistant?
We could allow them to reindex their reports into a historical-reports index; however, it's going to be rather difficult for users to consume this index, as the csv/pdf/pngs are base64-encoded binary, and reports are currently per-user specific. @tsullivan - Couldn't we just allow users to reindex their existing custom reporting indices?
That would be a manual step performed by an administrator, right?
Correct.
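For reference, a reindex like the one discussed above could be a single `_reindex` call against Elasticsearch. This is a sketch only; the index names are hypothetical and would need to match the actual `xpack.reporting.index` value in use:

```sh
# Copy documents from a custom 7.x reporting index into a plain
# "historical" index before upgrading (index names are hypothetical).
curl -X POST "http://localhost:9200/_reindex" \
  -H 'Content-Type: application/json' \
  -d '{
    "source": { "index": ".reporting-tenant-b-*" },
    "dest":   { "index": "historical-reports" }
  }'
```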
Summary
Users have historically been able to change the `kibana.index` setting in their `kibana.yml` to implement what will henceforth be referred to as "legacy multitenancy". This allowed users to have multiple different instances of Kibana using the same Elasticsearch cluster, but with isolation between all of the data that is stored in the `kibana.index`. This approach to multitenancy has been generally fraught with problems and has introduced considerable complexity to Kibana. With the implementation of Spaces, we no longer need to rely on the legacy method of multitenancy, as we have a first-class method of implementing multitenancy.

As such, starting in 8.0, we will be removing the ability to configure the following settings that were used to implement legacy multitenancy:

- `kibana.index`
- `xpack.reporting.index`
- `xpack.task_manager.index`

During 7.x, these settings will be deprecated, and users will be warned that they won't be able to configure them any longer starting in 8.0. Users will be encouraged to migrate to Spaces or use CCS/CCR with separate Elasticsearch clusters. As part of this effort, we will be ensuring that users have a clear path to migrate from a legacy multitenant instance to Spaces.
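For context, a legacy multitenant `kibana.yml` might have looked like the following sketch; the index names are illustrative, not defaults:

```yaml
# kibana.yml for a hypothetical legacy tenant "b".
# All three settings below are deprecated in 7.x and removed in 8.0.
kibana.index: ".kibana-tenant-b"
xpack.reporting.index: ".reporting-tenant-b"
xpack.task_manager.index: ".kibana_task_manager-tenant-b"
```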
Alternatives to legacy multitenancy
Spaces
Spaces allow users to segment their saved-objects and grant users access to different Spaces. One of the common uses of Spaces is to implement multitenancy, where multiple groups of users are able to share an instance of Kibana with isolation between the groups. When Spaces was first implemented, users were encouraged to use saved-object import/export to move their saved-objects from a tenant to Spaces. A number of users have successfully completed the migration from legacy multitenancy to Spaces, and we've seen the adoption of legacy multitenancy decline since the implementation of Spaces.
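As a rough sketch of the access-control side, a role created through Kibana's role API can scope a group of users to a single Space; the role name and space id below are hypothetical:

```sh
# Create a role that grants full Kibana access, but only inside the
# "tenant-b" space (role and space names are hypothetical).
curl -X PUT "http://localhost:5601/api/security/role/tenant_b_user" \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{
    "kibana": [
      { "base": ["all"], "spaces": ["tenant-b"] }
    ]
  }'
```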
Migrating to Spaces
Using saved-object management, a user is able to export all of the saved-objects from a legacy multitenant instance to a Space in the default tenant. However, there are currently some Kibana entities that can't be exported and imported using saved-object management. If we're going to no longer allow users to utilize legacy multitenancy, we should provide them a method of transitioning to Spaces, and as such, we'll need to ensure that users have a clear migration path.
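As a sketch of that migration path using the saved-objects APIs, assuming hypothetical host names and space id:

```sh
# Export dashboards and everything they reference from the legacy tenant...
curl -X POST "http://tenant-b-kibana:5601/api/saved_objects/_export" \
  -H 'kbn-xsrf: true' \
  -H 'Content-Type: application/json' \
  -d '{ "type": ["dashboard"], "includeReferencesDeep": true }' \
  > tenant-b-export.ndjson

# ...then import the file into a dedicated space on the default tenant.
curl -X POST "http://default-kibana:5601/s/tenant-b/api/saved_objects/_import" \
  -H 'kbn-xsrf: true' \
  --form file=@tenant-b-export.ndjson
```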
Common issues with saved-object import/export integration
CCR/CCS
If our users need true isolation of Kibana instances and their system-indices but want to use a shared data-set, they should use either cross-cluster replication or cross-cluster search. This solution should primarily be considered when the isolation that Spaces provides is determined to be insufficient, as using Spaces is much easier to configure and a less resource-intensive solution.
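For example, registering the shared cluster as a remote and querying it with CCS syntax might look like the following sketch; the cluster alias and host are hypothetical:

```sh
# Register the shared data cluster as a remote on the isolated
# deployment's own Elasticsearch node.
curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "persistent": {
      "cluster.remote.shared_data.seeds": ["shared-es.example.com:9300"]
    }
  }'

# Searches (and Kibana index patterns) can then address remote indices
# using the <cluster>:<index> syntax.
curl "http://localhost:9200/shared_data:logs-*/_search?size=1"
```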
Tasks
- Deprecate `kibana.index` setting #82521
- Deprecate `xpack.reporting.index` setting #82522
- Deprecate `xpack.task_manager.index` setting #82524
- Remove `kibana.index`, `xpack.reporting.index`, `xpack.task_manager.index` settings #101964

Original discussion: #60053