Elasticsearch.js client in the new platform #12442
Comments
Related to #18841
The first step toward solving this issue is in "Alternative to request scoped services", which has been merged. Updating the issue description to reflect these changes.
Just brainstorming... Would it work to have the security plugin create its own esClient, given the discussion above? Does that even make sense? What does it need to add to the barebones elasticsearch-js client inside of the Elasticsearch Service here? Does the monitoring plugin need a special esClient too? What does it need? Do other plugins declare security as a dependency, and when security is enabled, do they need to use security's esClient? Could this happen?

```js
// init function for plugin
plugin: (kibana, deps) => {
  const { security } = deps;
  // ...
  registerEndpoints(security, ...);
}

// when handling requests
function registerEndpoints(security, ...) {
  // ...
  async handler(req, res) {
    // maybe it also needs its own client?
    const client = await elasticsearch.getScopedDataClient(creds);

    // maybe it needs the security client for some things?
    const items1 = await security.client.find({...});
    const items2 = await security.client.search({...});

    return { items1, items2 };
  }
}
```
Security should impact the clients that are provided through the es service without any knowledge from the plugins that use those clients. That could mean the security plugin interacts with extension points from the es service that change client behaviors, or it could mean the security plugin interacts with an extension point on the es service that outright replaces the clients in the es service, but it wouldn't want to register a new and special "security" client.
Yup, I'm on board now. I have a new proposal coming...
I'm going to close this issue out now that #28344 is finished and the new elasticsearch-js library is published. I think these two things together will address most of what this issue describes, and we can create specific issues for anything else. |
Our current `callWithRequest` and `callWithInternalUser` have a couple of problems, mainly:

- we either have to inject the `request` all over the place or we create lots of these helpers that we then pass into services.
- we end up attaching "request-scoped" services directly on the `request` instead of the approach above. However, this doesn't scale nicely with our plugin system, as we want plugins to only receive values from direct dependencies, not transitive dependencies. Also, the only reason we're currently doing this is because we want to scope the es client to a request.
- it's not possible to type them because they rely too much on strings and js dynamicness.

As part of the new platform work I think we should find a better solution for this pattern.
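For context, a simplified sketch of the legacy-platform pattern being described (the route, index names, and handler shape are made up for illustration):

```ts
// Simplified legacy-platform pattern: `server` is the legacy plugin's Hapi server.
declare const server: any;

const { callWithRequest, callWithInternalUser } =
  server.plugins.elasticsearch.getCluster('data');

server.route({
  method: 'GET',
  path: '/api/my_plugin/items', // made-up route
  async handler(request: any) {
    // the request has to be threaded into every scoped call,
    // and 'search' is just a string, so nothing here is type-checked
    const scoped = await callWithRequest(request, 'search', { index: 'my-index' });
    // this one runs as the internal Kibana user instead
    const internal = await callWithInternalUser('search', { index: 'my-index' });
    return { scoped, internal };
  },
});
```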
So, what we want is basically: the ability to have a `cluster` object that's "preset" with a specific http request (or none, in the case of `callWithInternalUser`), and I want to be able to type the requests we do to Elasticsearch.

As of now, to perform the request we only need:

- `elasticsearch.requestHeadersWhitelist`, which defaults to `[ 'authorization' ]`
- Space

To still avoid passing around the request object, we can expose an API that constructs a permissions-defining object containing all the things from the request that we need (that Security needs, for example).
Implementation
In #14980 we implemented a way to avoid passing the request object, by providing functions in the Elasticsearch Service that can create scoped and unscoped `DataClient` and `AdminClient`.

For example, in the request handler method of a plugin that depends on the `ElasticsearchService`, you define a client:
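(The snippet from the PR isn't reproduced here; the following is only a rough sketch of the idea, reusing the hypothetical `getScopedDataClient` name from the brainstorming comment above. The `router` and `elasticsearch` shapes are assumptions, not the actual API.)

```ts
// Everything here is hypothetical: `router` and `elasticsearch` stand in for
// whatever the new platform actually hands a plugin.
declare const router: {
  get(path: string, handler: (req: any, res: any) => Promise<any>): void;
};
declare const elasticsearch: {
  getScopedDataClient(headers: Record<string, string | undefined>): {
    search(params: object): Promise<any>;
  };
};

router.get('/api/my_plugin/items', async (req, res) => {
  // a DataClient scoped to the whitelisted headers of this request
  const dataClient = elasticsearch.getScopedDataClient(req.headers);
  const result = await dataClient.search({ index: 'my-index' });
  return res.ok(result.hits.hits);
});
```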
This is a `DataClient` which accesses the `data` cluster of Elasticsearch, but it is scoped to the user from the request header. This means every request that is handled by this request handler method will instantiate a new `DataClient` scoped to that request. But that's not all, keep reading: internally, the constructor that creates the `DataClient` object actually uses the same, single `clients.data` Observable created within `ElasticsearchService` when the service is created. That means we always only ever have at most two clients: an admin client and a data client. But the `AdminClient` and `DataClient` objects exposed by the Elasticsearch Service are offered as ways to wrap those static clients with request-related information (a rough sketch of that wrapping follows below).

The PR has some examples of other clients we might need and where they're used.
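A minimal sketch of what that request-scoped wrapping could look like internally, assuming the shared client Observable and the elasticsearch.js `Client`; the constructor shape and field names are assumptions, not the actual implementation:

```ts
import { firstValueFrom, Observable } from 'rxjs';
import { Client, SearchParams } from 'elasticsearch';

// Illustrative only: the service creates a single client per cluster, and each
// DataClient is a cheap, request-scoped wrapper around that shared Observable.
class DataClient {
  constructor(
    private readonly client$: Observable<Client>, // the single `clients.data` Observable
    private readonly headers: Record<string, string | undefined> // whitelisted request headers
  ) {}

  async search(params: SearchParams) {
    const client = await firstValueFrom(this.client$);
    // a real implementation would apply `this.headers` to the underlying call;
    // exactly how is beyond this sketch
    return client.search(params);
  }
}
```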
Future
We are already aware of a future where we need the current Space from the request url to scope and restrict access to Kibana Saved Objects. This means that request handlers need more than just the request.headers for creating scoped ES clients. Something like:
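(The original snippet is not preserved; this is a hypothetical fragment, reusing the `getScopedDataClient` name from above, and `spaceFromUrl` is an invented helper for illustration.)

```ts
// Hypothetical: scope the client by the request credentials and the current Space.
const dataClient = elasticsearch.getScopedDataClient({
  headers: req.headers,    // whitelisted auth headers
  space: spaceFromUrl(req) // however the Space is extracted from the request url
});
```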
might suffice, and internally to the DataClient:
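(Again hypothetical; the filter shape and the `space` field name are assumptions used only to illustrate the idea.)

```ts
import { SearchParams } from 'elasticsearch';

// Hypothetical helper inside the DataClient: wrap every search so results are
// restricted to the request's Space in addition to the user's credentials.
function withSpaceFilter(space: string, params: SearchParams): SearchParams {
  return {
    ...params,
    body: {
      ...params.body,
      query: {
        bool: {
          filter: [{ term: { space } }], // only documents from this Space
          must: params.body?.query,      // keep the caller's original query, if any
        },
      },
    },
  };
}
```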
.. something to make searches automatically filtered by `Space` as well as by user.

Maybe in the request validation, we can have the `Router` construct a permissions object that would be automatically available on every request object passed to a handler, something like:
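(A hypothetical sketch of such a permissions object and of a handler consuming it; the property names and the `elasticsearch` shape are assumptions.)

```ts
// Hypothetical shape built by the Router during request validation:
interface Permissions {
  headers: Record<string, string | undefined>; // whitelisted auth headers
  space?: string;                              // current Space, when Spaces is enabled
}

// stand-in for the Elasticsearch Service dependency (assumed, not a real API)
declare const elasticsearch: {
  getScopedDataClient(scope: Permissions): { search(params: object): Promise<unknown> };
};

// ...and then every handler gets the permissions object on the request:
async function handler(req: { permissions: Permissions }) {
  const dataClient = elasticsearch.getScopedDataClient(req.permissions);
  return dataClient.search({ index: 'my-index' });
}
```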
Security
Whether or not X-Pack Security is enabled, we will always need to create scoped and unscoped admin and data clients, because there are internal Kibana operations outside of security that need them. For example, as long as we have the health check, it uses the internal user against the data cluster, so it needs an unscoped DataClient.
The Security plugin will create its own version of the Elasticsearch client, then use that for searches, something like:
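(A hypothetical sketch only; `createClient` here is the kind of helper discussed just below, not an existing Elasticsearch Service API, and the index name is illustrative.)

```ts
// Hypothetical: the Security plugin asks the Elasticsearch Service for its own
// client against the admin cluster, scoped to the current user.
declare const elasticsearchService: {
  createClient(options: {
    cluster: 'admin' | 'data';
    headers: Record<string, string | undefined>;
  }): { search(params: object): Promise<unknown> };
};

async function findSecurityDocs(req: { headers: Record<string, string | undefined> }) {
  const securityClient = elasticsearchService.createClient({
    cluster: 'admin',
    headers: req.headers,
  });
  // illustrative search against whatever internal indices Security needs
  return securityClient.search({ index: '.security' });
}
```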
I think most likely, it needs a way to create a client that hits the admin cluster with the current user, and possibly the space if Spaces is enabled. A helper like `createClient` exposed on the Elasticsearch Service might help with that.

First attempt (outdated)
This is one exploration I did that enables types (using the elasticsearch.js types from npm), but which still requires the http requests to be passed around.
Here you can see the types on the response:
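(The original code and screenshot are not preserved; the sketch below only illustrates the general idea of using the `elasticsearch` npm package with its TypeScript definitions, with made-up index and document names.)

```ts
import { Client, SearchResponse } from 'elasticsearch';

interface MyDoc {
  title: string;
  updatedAt: string;
}

const client = new Client({ host: 'http://localhost:9200' });

async function findDocs(): Promise<SearchResponse<MyDoc>> {
  // the response is typed: hits.hits[i]._source is a MyDoc
  const response = await client.search<MyDoc>({
    index: 'my-index',
    body: { query: { match_all: {} } },
  });
  // note: any per-request auth headers still have to be handled by the caller
  return response;
}
```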
However, because we can't "preset" `headers` on the `client`, we need to handle them ourselves, which makes for a really bad api. Also, this fully exposes the `client`, which I think we should try to avoid.

One idea we've discussed is to build a Kibana-specific "wrapper" around `client` that can be typed and that can be preset with an http request (or just the specific headers). We don't have to "copy" the entire `client` api, just the requests we actually need as we go. There are pros and cons to this, though. One con is what happens when someone needs access to an api that we haven't exposed yet, but I think we can solve this by giving access to a fully untyped api similar to what we have today.
Maybe getting a cluster could be something like this:
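(Purely illustrative; these names don't correspond to an actual API.)

```ts
// ask the Elasticsearch Service for a cluster, pre-bound to the request
const cluster = elasticsearch.getCluster('data', { request });
```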
or maybe something like:
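(Equally illustrative.)

```ts
// get an unbound cluster and scope it explicitly
const cluster = elasticsearch.getCluster('data').asScoped(request.headers);
```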
Or something entirely different.
Then we could add `search` and other apis directly on this `cluster` object (see the sketch below), which would call through the underlying `esClient`.

It could also be worth exploring ways of receiving this cluster directly in our http handlers in some way.
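(A minimal sketch of such a typed wrapper, assuming the underlying `esClient` is the elasticsearch.js client; the class name and constructor shape are illustrative.)

```ts
import { Client, SearchParams, SearchResponse } from 'elasticsearch';

// Illustrative wrapper: typed, preset with the request headers we care about,
// and exposing only the calls we actually need (plus, eventually, an untyped
// escape hatch for apis we haven't wrapped yet).
class Cluster {
  constructor(
    private readonly esClient: Client,
    private readonly headers: Record<string, string | undefined>
  ) {}

  search<T>(params: SearchParams): Promise<SearchResponse<T>> {
    // a real implementation would also apply `this.headers` to the underlying
    // call; how best to do that is exactly the open question in this issue
    return this.esClient.search<T>(params);
  }
}
```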