
[Fleet] Multiple output UI #117317

Closed · 2 of 3 tasks
nchaulet opened this issue Nov 3, 2021 · 5 comments

Assignee: nchaulet
Labels: Team:Fleet (Team label for Observability Data Collection Fleet team), v8.0.0

@nchaulet (Member) commented Nov 3, 2021:

Description

Implementation task for the multiple output UI.

[Screenshot: Screen Shot 2021-11-08 at 2 20 29 PM]

Tasks

  • Move the settings and outputs UI out to their own tabs
  • CRUD operations for the outputs (see the sketch below)
  • Validate the internal ES cluster or display a warning

TODO: Add more description here.
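
A rough sketch of what the outputs CRUD surface could look like from a client's perspective, following Fleet's existing REST conventions (the /api/fleet/outputs path, payload fields, and auth scheme here are assumptions, not the final design):

```ts
// Hypothetical payload; field names are modeled on Fleet's existing
// output config shape, not a final schema.
interface NewOutput {
  name: string;
  type: 'elasticsearch';
  hosts: string[];
  is_default?: boolean;
}

async function createOutput(kibanaUrl: string, output: NewOutput, apiKey: string) {
  const res = await fetch(`${kibanaUrl}/api/fleet/outputs`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'kbn-xsrf': 'true', // Kibana rejects write requests without this header
      Authorization: `ApiKey ${apiKey}`,
    },
    body: JSON.stringify(output),
  });
  if (!res.ok) throw new Error(`Create output failed: ${res.status}`);
  return res.json();
}
```

Update and delete would presumably follow the same pattern with PUT/DELETE against /api/fleet/outputs/{id}.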

nchaulet added the v8.0.0 and Team:Fleet labels Nov 3, 2021
nchaulet self-assigned this Nov 3, 2021
@elasticmachine (Contributor) commented:

Pinging @elastic/fleet (Team:Fleet)

@joshdover (Contributor) commented:

One thought I had: since we'll now be exposing additional Elasticsearch outputs (for the same cluster) to the user, should we be trying to validate that the Elasticsearch host is for the same cluster? It's tricky because we may not always have network access, but I think we could make a best effort to at least validate that the kibana_system user's credentials work on the cluster and the same cluster UUID is returned. If we can't access it, we could at least show a warning.

@nchaulet (Member, Author) commented Nov 5, 2021:

> One thought I had: since we'll now be exposing additional Elasticsearch outputs (for the same cluster) to the user, should we be trying to validate that the Elasticsearch host is for the same cluster? It's tricky because we may not always have network access, but I think we could make a best effort to at least validate that the kibana_system user's credentials work on the cluster and the same cluster UUID is returned. If we can't access it, we could at least show a warning.

I think it's always a good idea to validate user input. What do you have in mind: validating before creating or updating, and showing a toast warning if Kibana was not able to reach ES or the cluster UUID is different?

We could try to reach the output, but I guess there will be a few cases where we cannot reach ES. Also, which credentials should we use to call ES? The user's? Are there any security issues here?

@joshdover (Contributor) commented:

> What do you have in mind: validating before creating or updating, and showing a toast warning if Kibana was not able to reach ES or the cluster UUID is different? We could try to reach the output, but I guess there will be a few cases where we cannot reach ES.

I think we can follow logic like this:

  • If we can reach ES but get a 401 Unauthorized: don't allow them to add the output.
  • If we can reach ES but the cluster UUID is different: don't allow them to add the output.
  • If we can't reach ES: show a warning that we couldn't validate access (with the underlying error message) and allow them to add the output anyway if they choose (sketched after this list).
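
A minimal sketch of that branching, using the elasticsearch-js client (the helper name, error handling, and how this would plug into Fleet's routes are all assumptions):

```ts
import { Client } from '@elastic/elasticsearch';

type OutputValidation =
  | { ok: true }
  | { ok: false; blocking: boolean; reason: string };

// Hypothetical helper: probe the candidate output host and compare its
// cluster UUID against the cluster Kibana is already connected to.
async function validateOutputHost(
  host: string,
  expectedClusterUuid: string,
  auth: { username: string; password: string }
): Promise<OutputValidation> {
  const client = new Client({ node: host, auth });
  try {
    const info = await client.info(); // GET /; the response includes cluster_uuid
    if (info.cluster_uuid !== expectedClusterUuid) {
      // Reachable, but a different cluster: block the output.
      return { ok: false, blocking: true, reason: 'cluster UUID mismatch' };
    }
    return { ok: true };
  } catch (err: any) {
    if (err?.meta?.statusCode === 401) {
      // Reachable, but credentials were rejected: block the output.
      return { ok: false, blocking: true, reason: '401 Unauthorized' };
    }
    // Unreachable: warn with the underlying error, but let the user proceed.
    return { ok: false, blocking: false, reason: String(err) };
  } finally {
    await client.close();
  }
}
```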

In terms of where in the flow to put this, I think we should work with @dborodyansky to integrate this into the current designs.

> Also, which credentials should we use to call ES? The user's? Are there any security issues here?

It's a good point, because we'd be sending credentials to a potentially untrusted source. We need to be careful to make sure that a user couldn't use this to retrieve the kibana_system credentials. So I think we have to use the current user's credentials, which may or may not have access to the GET / API (I can't find the docs right now on what privilege is required, if any). We should make sure there is some API that we can be sure all users have access to before worrying about design.
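
One way to avoid ever sending kibana_system credentials to the candidate host is to build the probing client from the current user's Authorization header, along these lines (the import path and wiring are assumptions):

```ts
import { Client } from '@elastic/elasticsearch';
import type { KibanaRequest } from 'kibana/server';

// Build a client against the candidate output host that reuses the current
// user's Authorization header, so kibana_system credentials never leave
// Kibana for a potentially untrusted host.
function clientForOutputHost(host: string, request: KibanaRequest): Client {
  const authorization = request.headers.authorization;
  return new Client({
    node: host,
    headers: typeof authorization === 'string' ? { authorization } : {},
  });
}
```

If GET / turns out to require no privilege beyond authentication (which this thread leaves open), it would satisfy the "some API all users have access to" requirement.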

@nchaulet (Member, Author) commented:

Resolved by #118910
