[Fleet] Add support for input type packages #133296

Closed · 12 tasks done · Tracked by #319
joshdover opened this issue Jun 1, 2022 · 16 comments
Labels: Meta, Team:Fleet

Comments

@joshdover (Contributor) commented Jun 1, 2022

Meta issue: elastic/package-spec#319
Spec changes: elastic/package-spec#328

We're adding support for a new type of package, called an input package, which is distinct from an integration package. Packages will have a new type field in their manifest.yml which indicates if they're an integration or input package.
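As a rough illustration only (not the actual package-spec schema or Fleet's real types), the new manifest field could be modeled like this:

```ts
// Hypothetical sketch of the new manifest field as Fleet might model it;
// the actual package-spec schema and Fleet types may differ.
type PackageType = 'integration' | 'input';

export interface PackageManifest {
  name: string;
  title: string;
  version: string;
  // New: omitted (or 'integration') for classic packages, 'input' for input packages.
  type?: PackageType;
}
```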

There are several changes required in the Integrations UX and package installation code to support this. To this end, we'll aim to take a phased approach here. We should first focus on unblocking integration developers from creating new input type packages and migrating existing integration type packages to input type packages. Then, we should focus on ensuring the "upgrade" path for migrated integration type packages is stable. Finally, we'll work on UX changes and enhancements within the policy editor and elsewhere in the Fleet/Integrations UI to improve the experience around input type packages.

🗺️ Phases

✅ Phase 1 - Support new input packages (Done)

Child issue: #137750

  • A user can create a new integration policy for a package with type: input
    • Any input package should include the data_stream.dataset variable definition in its package spec in order to facilitate ingest customization
  • Documentation exists for the manual process of creating an index template and an ingest pipeline to customize the ingest process for the input package
    • See comment below for a detailed walkthrough of the manual process as it stands today

✅ Phase 2 - Support upgrades from integration packages to input packages (Done)

Child issue: #137751

  • A user can install the latest version of a package which includes a change from type: integration to type: input
  • A user can upgrade their existing policy for the package and maintain their variable definitions, ingest customizations, etc
    • It's likely that the data_stream.dataset variable will be added to these packages in these updated versions. If a user changes the dataset using this variable, they'll need to follow the same manual process mentioned above to customize ingest. We don't need to do any kind of intelligent "migration" of ingest customizations during upgrades at this time.

🎨 Phase 3 - UI/UX Enhancements in support of input packages (Unstarted)

Policy editor UI

Child issue: #145903

  • For integrations with type: input, display the dataset form field for the data_stream.dataset variable as a "top-level" field
  • The dataset input is pre-populated with all existing datasets. Each dropdown option is of the format <type>-<dataset>; the default dropdown value is generic
    • We should be able to reuse the API that powers Fleet's Data streams tab - specifically the dataset column of the listing table on this tab.
    • Any dropdown values that start with the same type as the selected integration appear at the top of the list
      • e.g. If I'm adding a policy for the log integration, options that begin with log-* should appear first
  • The "index templates" and "ingest pipelines" customization UI is not displayed for input packages
Figma screenshots (images omitted)

Ingest customization components

Child issue: #145529

  • Create ingest customization resources when the input package integration policy is saved
    • Index template named {type}-{dataset} that matches on an index pattern of {type}-{dataset}-*
    • Ingest pipeline named {type}-{dataset}-{package-version} (see the naming sketch below)
      • ❓ Should this also include the package name, since it's otherwise unclear where the "version" value is coming from? Could this simply move into a _meta value?
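A hypothetical helper, just to make the naming convention above concrete (illustrative only, not Fleet's actual implementation):

```ts
// Illustrative helper that mirrors the naming proposal above; not Fleet's actual code.
function customIngestAssetNames(type: string, dataset: string, pkgVersion: string) {
  return {
    indexTemplate: `${type}-${dataset}`,
    indexPattern: `${type}-${dataset}-*`,
    // Open question above: include the package name too, or move the version into _meta?
    ingestPipeline: `${type}-${dataset}-${pkgVersion}`,
  };
}

// customIngestAssetNames('logs', 'nginx.access', '1.2.0')
// -> { indexTemplate: 'logs-nginx.access',
//      indexPattern: 'logs-nginx.access-*',
//      ingestPipeline: 'logs-nginx.access-1.2.0' }
```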

Future proofing

  • Ensure that the output_permissions list in the agent policy sent to Fleet Server includes the configured dataset (see the sketch after this list)
  • For integrations with type: integration, set the data_stream.dataset field value via a hidden input
    • Dataset values for these integrations don't need to be configurable in the policy editor right now, but we should probably make sure we set this the same way for all package types to better facilitate future enhancements and migrations
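For the output_permissions bullet, a hedged sketch of the slice of the agent policy that would need to cover a user-configured dataset. The structure and field names here are assumptions based on how Fleet grants data stream privileges today, not the exact schema sent to Fleet Server:

```ts
// Assumed/simplified shape; not the exact agent policy schema.
export const outputPermissions = {
  default: {
    'logs-custom.docker': {
      indices: [
        {
          names: ['logs-custom.docker-*'],              // the user-configured dataset must be covered
          privileges: ['auto_configure', 'create_doc'], // privileges Fleet typically grants for data streams
        },
      ],
    },
  },
};
```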

❓ Open Questions

  • What happens if the user creates a custom dataset inline in the policy editor that conflicts with a package that is installed later?
  • Where should the creation of new index templates happen? Can we do this in the existing "save policy" APIs, or should we consider starting the process of a customization API that allows us to add new datasets to installed integrations?
  • Are there performance concerns around fetching all existing dataset values for the dropdown? Should this be a paginated request to fetch in batches?
joshdover added the Team:Fleet label on Jun 1, 2022
@elasticmachine (Contributor)

Pinging @elastic/fleet (Team:Fleet)

@kpollich (Member)

Contextualizing the open questions we have so far:

What happens if the user creates a custom dataset inline in the policy editor that conflicts with a package that is installed later?

Consider the following scenario:

  1. User creates an integration policy for the custom logs integration to ingest nginx access logs, specifies dataset value nginx.access
  2. Fleet creates an index template with name logs-nginx.access
  3. Custom logs integration ingests data to a logs-nginx.access-default data stream based on the provided dataset value and namespace in the integration policy
  4. Later, the customer adds another integration policy, this time for the nginx integration which defines the logs-nginx.access-default data stream again

Once we arrive at step number 4, we'll have a conflict when we install the nginx package because we'll attempt to create an index template with name logs-nginx.access despite one already existing. The index template from the nginx package will also contain mappings defined by the nginx package, whereas the custom logs package won't have those same mappings.

One approach to resolve this conflict might be to adjust the naming of the conflicting index template, data streams, etc generated from installing the nginx package. e.g. logs-nginx1.access.

Or, we could prefix these assets with the integration name when creating them for input packages, e.g. logs-log.nginx.access.

Where should the creation of new index templates happen? Can we do this in the existing "save policy" APIs, or should we consider starting the process of a customization API (#121118) that allows us to add new datasets to installed integrations?

We should be able to add some conditional logic to install index templates as part of the integration policy creation process, but it stands to reason that this logic might belong in the proposed "customization" API linked above. The thinking there is that we'll be exposing functionality like "add additional datasets to an integration" via this customization API eventually, so we could potentially start on this now.

@kpollich (Member)

I've been working on contextualizing how we might solve the problems today that we're proposing to solve with input packages, based on this ask from @joshdover:

@kpollich would you be able to evaluate the technical tradeoffs of moving forward with the existing functionality for new input packages? We should consider the upgrade path and what guidelines we could introduce to minimize upgrade issues. I want us to know if adding more input packages now with the existing format will create more work on the UI team to support smooth upgrades or if the fact that we already have to support upgrading the existing package policies makes this point moot. Any guidelines or guardrails that would minimize the upgrade complexity would also be helpful, if relevant.

So, I'll enumerate some of the goals of input packages as I understand them:

  1. Provide an out-of-the-box experience for logs/metrics we don't have a specific integration for
  2. Allow users to customize ingestion for input packages through custom ingest pipelines, component templates, etc
  3. Preserve ingest customization on stack upgrades and integration upgrades

As it stands today, Fleet and the existing integration type packages have some support that gets users close to these goals. I'll walk through a process below to set up customized ingest of custom logs and an ensuing upgrade of the custom logs integration.

Provide a custom dataset value for the custom logs integration policy

Fleet supports a data_stream.dataset field that allows integration developers to prompt a user for a dataset value via Fleet's policy editor, e.g.

(screenshot omitted)

Observe logs are written to the configured data stream, e.g. logs-custom.docker-default

(screenshot omitted)

Set up an ingest pipeline

If I want to customize the ingest process for this custom data stream, I need to set up an index template and ingest pipeline completely manually.

One of the goals we'd aim to reach with full support for input packages would be automating the creation of these resources for the user, e.g. when I create a custom logs policy for the input package version of the integration and specify my custom dataset, we'd create an index template wired up to Fleet's existing global component templates, final pipeline, etc. that you could immediately start customizing.

(screenshot omitted)

(It's confusing that I named this with "pipeline" at the end, but this is my index template.)

Now, I'll set up an ingest pipeline with some dummy field processing, e.g.

(screenshots omitted)

Once I've got this index template created matching on my customized data stream and I've set my default_pipeline value to my own ingest pipeline, I'm basically set up with customized ingest for my logs.
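For readers without the screenshots, here is a rough equivalent of those manual steps using the Elasticsearch JavaScript client. The names, processor, and priority are illustrative, and this is a sketch of the manual process described above, not Fleet-generated output:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'https://localhost:9200' }); // auth omitted for brevity

async function setUpCustomIngest() {
  // 1. Ingest pipeline with some dummy field processing.
  await client.ingest.putPipeline({
    id: 'custom-docker-logs-pipeline', // illustrative name (as used in the screenshots above)
    description: 'Custom processing for the custom.docker dataset',
    processors: [{ set: { field: 'event.custom', value: 'true' } }],
  });

  // 2. Index template matching the customized data stream, pointing at that pipeline.
  await client.indices.putIndexTemplate({
    name: 'logs-custom.docker', // illustrative name
    index_patterns: ['logs-custom.docker-*'],
    data_stream: {},
    priority: 200, // assumption: high enough to win over the generic built-in logs template
    template: {
      settings: { index: { default_pipeline: 'custom-docker-logs-pipeline' } },
    },
  });
}

setUpCustomIngest().catch(console.error);
```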

(screenshot omitted)

Upgrading the custom logs integration

Let's say we push a new version of the custom logs integration and I want to upgrade my integration policies.

(screenshot omitted)

I added a tags field that doesn't really do anything just to illustrate the change in version:

(screenshot omitted)

Following the upgrade, my ingest pipeline is still working as expected, e.g.

(screenshot omitted)

However, I imagine that if the custom logs integration defined mappings of some kind and we made breaking changes to those mappings between versions, my custom ingest pipeline would probably break here. I don't know if that's an actual use case for input packages, though, because by their nature they don't make assumptions about data shape, mappings, etc. The idea is just to get from my infrastructure -> Elastic with as little friction as possible, then customize ingest on my own.

@joshdover let me know if I've missed the mark on what you were getting at with your questions around upgrades here. I wasn't quite confident on my understanding of the ask.

Ingest pipeline and Mappings UI

Something else worth calling out is that we're actively working on delivering #133740 in 8.4. The work here includes adding UI elements that link out to ingest pipelines and mappings for the current integration in Fleet's policy editor, e.g.

(screenshot omitted)

Note, however, that we don't list the ingest pipeline I created above in the case where the dataset has been customized. This is because Fleet isn't creating an index template for the configured dataset, only for the default one. @nchaulet maybe you'd have some input here on whether it'd be possible to create the index template + ingest pipeline objects when the dataset has been customized in cases like this. I think this is not trivial because the index template + ingest pipeline are created when the package is installed, and we don't know what the custom dataset value is at that time.

Recommendations

I think one helpful thing we could do here in terms of "guidelines and guardrails", as Josh called them, would be to provide some kind of documentation around the process of introducing custom datasets, index templates, and ingest pipelines like I went through here.

We have this tutorial in the docs for applying a custom ILM policy to an integration's existing data stream, but we don't have any documentation on creating a custom data stream for things like custom logs.

So, my first recommendation would be to document a "customized ingest" process for custom application logs, since reducing friction here is critical. @ruflin called out in emails that this is likely our most common use case, yet the most difficult to get started with. Input packages help with this problem by automating the creation of an index template to get started, but we could bridge the gap until implementation by documenting the manual process better.

My second recommendation, then, would be to add the data_stream.dataset variable to more packages as a stop-gap between now and the implementation of input packages. Since Fleet already supports this value and it's clear that we can manually set up a custom ingestion process for Custom Logs, it would make sense to support the variable in other packages that are candidates for future migration to input packages.

It does seem like with proper documentation in a more "tutorial" style tone, we could solve some of the problems we're aiming to solve with input packages and potentially shift priorities if necessary. I'm not completely sold on my exploration of upgrades, though, and I'd appreciate some more input/direction there.

@ruflin (Member) commented Jun 27, 2022

A few comments on my end:

  • Naming of templates and pipelines: Let's make sure whatever we document around input packages for pipeline names and templates is fully aligned with what we create with Fleet. For example, I stumbled over names like custom-docker-logs-pipeline, but I'm not sure if you just used it to explain what it does
  • Note, however, that we don't list the ingest pipeline I created above in the case where the dataset has been customized. This is because Fleet isn't creating an index template for the configured dataset, only for the default one.: If we provide this feature as-is in input packages, it will be more misleading than helping. Users will modify ingest pipelines and templates which then have no effect because the dataset was changed.
  • Upgrades: The upgrade issue I was thinking of is what if a package is an "integration package" in v1.2.0 and then an "input package" in v1.3.0. As we don't have the implementation yet, it's hard to tell how it will work, but we must find a good way here.

@joshdover (Contributor, Author)

@joshdover let me know if I've missed the mark on what you were getting at with your questions around upgrades here. I wasn't quite confident on my understanding of the ask.

  • Upgrades: The upgrade issue I was thinking of is what if a package is an "integration package" in v1.2.0 and then an "input package" in v1.3.0. As we don't have the implementation yet, hard to tell how it will work but we must find a good way here.

+1, the package policy upgrade path is the one I am most concerned about. We need to understand the risk of continuing to add more packages of this kind in the integration-type package format if we're going to need to support a smooth package policy upgrade path to a future input-type package format. We need to be sure we can migrate any input vars from an integration package policy to an input package policy.

If there are any patterns that we should avoid doing in these integration packages today to make that package policy upgrade to an input package smoother, it'd be good to know that before adding more "input" packages that are written in the integration-type format. @kpollich let me know if that's not clear.

@kpollich (Member) commented Jun 27, 2022

Naming of templates and pipelines: Let's make sure whatever we document around input packages for pipeline names and templates is fully aligned with what we create with Fleet. For example, I stumbled over names like custom-docker-logs-pipeline, but I'm not sure if you just used it to explain what it does

Correct, I was just using this name to be demonstrative about what I was doing in the screenshots. If we provide first class documentation for this manual process, we should align the naming conventions with what Fleet generates, I agree.

Note, however, that we don't list the ingest pipeline I created above in the case where the dataset has been customized. This is because Fleet isn't creating an index template for the configured dataset, only for the default one.: If we provide this feature as-is in input packages, it will be more misleading than helping. Users will modify ingest pipelines and templates which then have no effect because the dataset was changed.

This makes sense. I'll make a note in the description to capture the need to hide the ingest pipelines and templates elements in the policy editor for input packages.

Upgrades: The upgrade issue I was thinking of is what if a package is an "integration package" in v1.2.0 and then an "input package" in v1.3.0. As we don't have the implementation yet, hard to tell how it will work but we must find a good way here.

+1, the package policy upgrade path is the one I am most concerned about. We need to understand the risk of continuing to add more packages of this kind in the integration-type package format if we're going to need to support a smooth package policy upgrade path to a future input-type package format. We need to be sure we can migrate any input vars from an integration package policy to an input package policy.

If there are any patterns that we should avoid doing in these integration packages today to make that package policy upgrade to an input package smoother, it'd be good to know that before adding more "input" packages that are written in the integration-type format.

Thanks for clarifying. I understand the need for definition here. It seems our goals with upgrades from non-input packages to input packages are as follows:

  1. Maintain variable definitions between package versions
  2. Ensure the package continues to write to its configured data stream

1 is not an issue as far as I can tell. The upgrade process between package types won't affect how Fleet handles variables during the upgrade, so I don't think we have any work to do here.

2 is a more interesting problem. Let's walk through the upgrade/migration process for the custom logs package:

  1. User creates custom logs integration policy for their custom Node app, sets data_stream.dataset to node.application, writes to data stream logs-node.application-default
  2. User manually creates index template + ingest pipeline to customize ingest process for their application logs as above
  3. New version of custom logs package is published wherein the package is migrated to type: input
  4. Because data_stream.dataset is already populated in their custom logs integration policy, the Fleet policy editor continues to work as expected because we can populate the new dataset input with that value

Where I see potential for issues is migrating a package that doesn't already define data_stream.dataset in its variables. We'll need to make sure Fleet can resolve the current dataset the package is writing to when we display the policy editor in the context of an upgrade. The logic for resolving this default value would be roughly as follows:

render upgrade policy editor
  if data_stream.dataset is set
    dataset input defaults to value of data_stream.dataset
  else if data_stream.dataset is unset
    resolve dataset value based on package spec - "{package name}.{previous version data stream name}" - e.g. "log.log" for custom logs

We need to use the previous version of a package to determine what dataset it's currently writing to because input packages will no longer include data streams. There might be a way to resolve this from the existing policy object, as well.
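A hedged TypeScript sketch of that resolution logic (the policy and package shapes here are simplified assumptions, not Fleet's actual types):

```ts
// Simplified shapes for illustration only; not Fleet's actual data model.
interface PolicyVars {
  'data_stream.dataset'?: { value?: string };
}
interface PreviousPackageInfo {
  name: string;              // e.g. 'log' for custom logs
  dataStreamNames: string[]; // data streams defined by the previous (integration-type) version
}

function resolveDefaultDataset(vars: PolicyVars, previousPkg: PreviousPackageInfo): string {
  const existing = vars['data_stream.dataset']?.value;
  if (existing) {
    // data_stream.dataset is set: pre-fill the dataset input with it
    return existing;
  }
  // Otherwise fall back to "{package name}.{previous version data stream name}",
  // e.g. 'log.log' for custom logs.
  return `${previousPkg.name}.${previousPkg.dataStreamNames[0]}`;
}
```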

To more specifically address @joshdover's concern here:

If there are any patterns that we should avoid doing in these integration packages today to make that package policy upgrade to an input package smoother, it'd be good to know that before adding more "input" packages that are written in the integration-type format.

One thing that will probably make the upgrade process smoother would be adding the data_stream.dataset variable like we do for custom logs to any new "input" packages we add before Fleet includes first-class support for type: input packages. We'll have to do less inference logic to resolve the dataset on upgrade this way.

I don't think I've come across anything in terms of patterns in the other direction though, i.e. those that would actively harm the upgrade process when migrating to input packages. Maybe adding additional data streams to a package that's a candidate for migration would cause additional complexity. I'll detail further down below in the questions section.

Questions

Something I'm not clear on is the number of inputs defined by an input package. Will there always be a single input for all input packages? e.g. consider the redis package we have today, which defines five data streams:

  1. info
  2. key
  3. keyspace
  4. log
  5. slowlog

Would there ever be a case where a package with multiple data streams like this is migrated to an input package? I don't think this would cause any problems, but we'd need to prompt for data_stream.dataset on each input block for these. The way I understand input packages, this doesn't seem like a use case we'll have, but I just want to check.

I would also like to know if there's a list of packages anywhere that are candidates for migration to input packages. Perusing the integrations repo I see some migration efforts planned but it's not clear to me what those actually are. There's an input label that I was hoping would be helpful, but it looks largely unused. Something like the logfile -> filestream migration list would be quite helpful in investigating any potential migration issues on Fleet's side of things.

@ruflin (Member) commented Jun 28, 2022

This makes sense. I'll make a note in the description to capture the need to hide the ingest pipelines and templates elements in the policy editor for input packages.

Even though that fixes the misleading part and is likely a quick fix, as soon as we have input packages, providing this properly is at the core of the feature.

Something I'm not clear on is the number of inputs defined by an input package

An input package can only define a single input. @mtojek I assume this is also represented in the package-spec?

Here you can find a list of inputs: https://www.elastic.co/guide/en/beats/filebeat/current/configuration-filebeat-options.html These are the Filebeat inputs, but if any of these names show up in a package, it should be an input package. Keep in mind that we have so far blocked creating more "input packages" because the concept didn't exist. For your redis example, redis is likely a bad one that we should not migrate; all the redis input does is gather the slowlog from Redis.

@mtojek (Contributor) commented Jun 28, 2022

An input package can only define a single input. @mtojek I assume this is also represented in the package-spec?

Yes, it's in the spec. As policy_templates is an array, you can theoretically define many templates. In terms of integrations, Fleet used to consider only the first item in the array.

@joshdover (Contributor, Author)

I would also like to know if there's a list of packages anywhere that are candidates for migration to input packages.

I sent this to you via DM as it's currently private.

One thing that will probably make the upgrade process smoother would be adding the data_stream.dataset variable like we do for custom logs to any new "input" packages we add before Fleet includes first-class support for type: input packages. We'll have to do less inference logic to resolve the dataset on upgrade this way.

Thanks for your investigation here. AFAIK all of the existing "input" packages use this already, so I think we should be ok here.

Note, however, that we don't list the ingest pipeline I created above in the case where the dataset has been customized. This is because Fleet isn't creating an index template for the configured dataset, only for the default one.: If we provide this feature as-is in input packages, it will be more misleading than helping. Users will modify ingest pipelines and templates which then have no effect because the dataset was changed.

This makes sense. I'll make a note in the description to capture the need to hide the ingest pipelines and templates elements in the policy editor for input packages.

Agreed that custom ingest pipelines and index templates are central to this feature, so we do need to provide ways for users to create these from the UI, just like we do for integrations. I think we need to define a few things here to make this feature successful in the context of input packages:

  • When should Fleet create these templates for the custom dataset name? How would we provide links to the custom ingest pipeline and component template editors if they aren't created yet?
  • What should happen if the user changes the dataset of an existing policy that is already associated with a custom pipeline and/or index template? This is very similar to one of the open questions remaining in the namespace-specific template feature. I think we likely need dedicated UX to help the user make a decision: copy customizations to new templates, move customizations, merge customizations (though, I think we'd want to avoid merging for now)

cc @akshay-saraswat - we need to include a solution to this in the UX mocks you're working on. It would be good to sync up with @kpollich and discuss options before getting too far.

@kpollich (Member)

When should Fleet create these templates for the custom dataset name?
How would we provide links to the custom ingest pipeline and component template editors if they aren't created yet?

I think this will be similar to what we're doing with #133740.

For integration packages, we create the ingest pipeline, component templates, and index template at the time of package installation. Most often, this occurs when the first integration policy for that package is created. So, in #133740, we don't display the "links" UI to view/edit your index template or ingest pipeline until after you've created your first policy.

I think a similar workflow can apply to input packages, but we can only install the ingest pipeline, component templates, and index template at the time of policy creation, because we need to know the configured dataset value to create these objects for the user.

So, in the create policy context where we're creating a brand new policy, we won't show the "links" UI. Then, after saving the policy, Fleet will create the templates + pipeline. We should be able to link them to the given policy through _meta fields, or by enforcing a naming convention. In #133740 we use a naming convention (https://github.com/elastic/kibana/blob/main/x-pack/plugins/fleet/common/services/datastream_es_name.ts) to look up the associated templates and pipeline.
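To illustrate the two linking options (deterministic naming vs. _meta), a hypothetical sketch; neither the names nor the _meta fields here reflect Fleet's actual implementation:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'https://localhost:9200' });

async function createLinkedIndexTemplate(type: string, dataset: string, packagePolicyId: string) {
  // Option A: a deterministic name derived from type/dataset (cf. datastream_es_name.ts).
  const name = `${type}-${dataset}`;

  // Option B: stamp ownership into the template's _meta so it can be looked up later.
  await client.indices.putIndexTemplate({
    name,
    index_patterns: [`${name}-*`],
    data_stream: {},
    _meta: {
      managed_by: 'fleet',                // assumption: mirrors Fleet's managed-asset convention
      package_policy_id: packagePolicyId, // hypothetical linkage field, not Fleet's actual schema
    },
  });
}

createLinkedIndexTemplate('logs', 'custom.docker', 'example-policy-id').catch(console.error);
```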

What should happen if the user changes the dataset of an existing policy that is already associated with a custom pipeline and/or index template? This is very similar to one of the open questions remaining in the namespace-specific template feature (#121118). I think we likely need dedicated UX to help the user make a decision: copy customizations to new templates, move customizations, merge customizations (though, I think we'd want to avoid merging for now)

This is a pretty involved problem to solve and will likely require substantial UX investment as you've mentioned. Prompting the user for how we should handle dataset change feels like the right starting point for me, but we'll need UX direction on how to present the options/what the options are, and implementation work to actually support each option. I don't think we have any existing support or similar logic for the copy/move/merge actions we're proposing here.

kpollich removed their assignment on Jul 8, 2022
@kpollich (Member) commented Jul 8, 2022

Unassigning myself since I think technical definition is complete here to the point that we could start on phases 1 and 2 in an upcoming iteration. Phase 3 will likely need additional definition and design resources, and should be prioritized as a later scope of work in a future iteration.

jlind23 added the Meta label on Nov 23, 2022
juliaElastic added a commit that referenced this issue Dec 6, 2022
## Summary

Closes #145903

Added a datasets combo box to the `Dataset name` variable for input type packages, with the option of creating a new one.

Using the existing `/data_streams` API to show the list of all datasets.

Package policy create/edit API already supports setting the value of
`data_stream.dataset` (input packages should have this variable as
described in #133296)

To verify:
- Start local EPR with input type packages as described
[here](#140035 (comment))
- Add `Custom Logs` integration
- Verify that Dataset name displays a dropdown and the selected value is
persisted.
- Verify that a new value can be entered in Dataset name
- Verify that Edit integration policy displays the existing value and
allows selecting/creating another one.

<img width="924" alt="image"
src="https://user-images.githubusercontent.com/90178898/205680787-3ef7da08-f5f0-4f05-b8d7-3a1c0a6a3d56.png">
<img width="1008" alt="image"
src="https://user-images.githubusercontent.com/90178898/205679497-935fe450-ce78-4f0b-943e-58e7f851f44b.png">
<img width="1006" alt="image"
src="https://user-images.githubusercontent.com/90178898/205679589-fedbbe0e-2c4d-4c00-986f-34ec5c2eb2f6.png">

Added ordering of datasets to move up those that start with the package name, e.g. `system*` datasets come first if adding a `system` integration. Other than that, datasets are ordered alphabetically.
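A hedged sketch of that ordering logic (simplified; the actual implementation in the PR may differ):

```ts
// Datasets that start with the package name come first, then everything else,
// each group sorted alphabetically.
function orderDatasets(datasetList: string[], pkgName: string): string[] {
  const matching = datasetList.filter((d) => d.startsWith(pkgName)).sort();
  const rest = datasetList.filter((d) => !d.startsWith(pkgName)).sort();
  return [...matching, ...rest];
}

// orderDatasets(['nginx.access', 'system.cpu', 'system.memory'], 'system')
// -> ['system.cpu', 'system.memory', 'nginx.access']
```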

<img width="482" alt="image"
src="https://user-images.githubusercontent.com/90178898/205924837-a9807c92-2fe4-431a-88c6-f161d00812fb.png">

The rest of the requirements seem to be already implemented, see
[comments](#145903 (comment))

### Checklist

- [x] Any text added follows [EUI's writing
guidelines](https://elastic.github.io/eui/#/guidelines/writing), uses
sentence case text and includes [i18n
support](https://github.com/elastic/kibana/blob/main/packages/kbn-i18n/README.md)
- [x] [Unit or functional
tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html)
were updated or added to match the most common scenarios
@hop-dev (Contributor) commented Jan 24, 2023

@kpollich @jen-huang I think the aspiration is that input packages will become GA in 8.7, and we should discuss what is required for us to get there (I think we are really close). It should be a conscious decision on our part to "release" input package support and make sure we are happy with the feature, and I want to kick off that conversation.

Here are 2 issues which are currently top of mind for input packages and which I think should be considered before we go GA:

I would like to see a "real" input integration working too, as all work has been done using test packages. I know that @ishleenk17 is working on the Jolokia input integration, and there are 3 beta input packages on EPR at the minute: https://epr.elastic.co/search?prerelease=true&type=input

We will also want to make sure we are happy with the upgrade path from integration package to input package (which #149423 relates to).

@ishleenk17

@hop-dev: There are 2 input packages currently available in technical preview: the Jolokia input package and the Statsd input package. This meta issue for input packages will give you all the details.

I suppose one of these can be picked up to do the testing.
cc: @rameshelastic @lalit-satapathy @ruflin

@ruflin (Member) commented Jan 25, 2023

The main question from my side around going GA is: do we expect any future breaking changes, or only additions of features? As mentioned by @hop-dev, the upgrade path is important as we have built too many non-input packages.

@ishleenk17 Let's make sure we continue the detailed testing on input packages to uncover any potential issues. I have no strong opinion on which one we pick, but let's pick one with decent complexity.

@jlind23 (Contributor) commented May 5, 2023

@kpollich @joshdover are we ok to close this as done?

@kpollich (Member) commented May 5, 2023

Yes. Closing.

kpollich closed this as completed on May 5, 2023