Preconfiguration API behavior for missing package policies #113921
Pinging @elastic/fleet (Feature:Fleet)
@nchaulet this seems like something we should be able to handle. We shouldn't have any of the … The challenge is deciding when to execute this retry, which is something that should be tackled as part of #111859.
Yes, this is something we can achieve. I have been digging a little more in our code, and there are probably a few things that can lead to errors in the way the preconfigure service works:
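The retry discussed above could take the shape of a simple backoff wrapper around the registry call. The sketch below is hypothetical and not Fleet's actual implementation; `withRetry`, the attempt count, and the delays are all assumptions for illustration.

```typescript
// Hypothetical sketch: retry an async operation (e.g. a package registry
// fetch) with exponential backoff. Not Fleet's actual API.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off before the next attempt: baseDelayMs, 2x, 4x, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

The open question from the thread remains when to schedule these retries (startup only, periodic, or on demand), which is what #111859 is meant to address.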
It seems like we need to audit the existing behavior of preconfiguration, including how it handles failures and kibana.yml config changes, so we can determine a consistent model that meets all of these types of requirements. I suspect this might be better than patching one thing at a time?
As discussed with @nchaulet async, this is important for the cloud setup for 8.0 for the apm-server. The cloud setup is in …
Test instructions:

```yaml
xpack.fleet.packages:
  - name: fleet-server
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  # Cloud Agent policy
  - name: Elastic Cloud agent policy
    id: policy-elastic-cloud
    description: Default agent policy for agents hosted on Elastic Cloud
    is_default: false
    is_managed: true
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled: []
    package_policies:
      - name: Fleet Server
        package:
          name: fleet_server
        inputs:
          - type: fleet-server
            keep_enabled: true
            vars:
              - name: host
                value: 0.0.0.0
              - name: port
                value: 8220
```
```yaml
xpack.fleet.packages:
  - name: fleet-server
    version: latest
  - name: apm
    version: latest
xpack.fleet.agentPolicies:
  # Cloud Agent policy
  - name: Elastic Cloud agent policy
    id: policy-elastic-cloud
    description: Default agent policy for agents hosted on Elastic Cloud
    is_default: false
    is_managed: true
    is_default_fleet_server: true
    namespace: default
    monitoring_enabled: []
    package_policies:
      - name: apm-cloud-123
        package:
          name: apm
      - name: Fleet Server
        package:
          name: fleet_server
        inputs:
          - type: fleet-server
            keep_enabled: true
            vars:
              - name: host
                value: 0.0.0.0
              - name: port
                value: 8220
```
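The second config differs from the first only in the `apm-cloud-123` entry added under `package_policies`. A minimal sketch of detecting such missing package policies by name (the names below are taken from the configs above; the comparison itself is an illustration, not Fleet's code):

```typescript
// Hypothetical sketch: given the package policy names declared in
// preconfiguration and those already saved on the agent policy,
// compute which ones still need to be created.
const preconfigured = ['apm-cloud-123', 'Fleet Server'];
const existing = ['Fleet Server']; // e.g. state after a partial setup

const missing = preconfigured.filter((name) => !existing.includes(name));
console.log(missing); // → ['apm-cloud-123']
```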
Hi @joshdover
Could you please confirm if we need to create any policy manually for this? Build details:
Please let us know if we are missing anything here.
@amolnater-qasource Apologies, I've updated the test instructions above to include the …
Hi @joshdover
Build details:
We observed that we are still getting some errors, due to which we are unable to access the Fleet tab. We tried to troubleshoot, but didn't have any success. Thanks!
@amolnater-qasource Good catch, these need to be updated for the removal of default packages. I've updated the configs above again to include the fleet_server package. |
Hi @joshdover Steps followed:
Build details:
Hence marking this as QA: Validated.
Using the preconfiguration API to set up agent policies results in an empty agent policy if the package registry cannot be reached. If the registry eventually becomes reachable, the agent policy is not updated, not even on Kibana startup.
This potentially leaves agent policies in a half-baked state that is not automatically recoverable.
There are probably a number of questions to answer about whether policies from the preconfig API need to update existing agent policies, but can we discuss comparing preconfigured agent policies with installed ones and adding missing package policies (without updating existing ones if they differ)?
cc @jen-huang @joshdover @ruflin
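The proposal above (add missing package policies, never touch existing ones) could be sketched as a reconciliation step like the following. This is a hypothetical illustration, not Fleet's actual implementation; the `PackagePolicy` shape is a simplified assumption based on the configs in this issue.

```typescript
// Simplified shape of a preconfigured package policy (assumption).
interface PackagePolicy {
  name: string;
  package: { name: string };
}

// Hypothetical sketch: reconcile a preconfigured agent policy against the
// saved one by adding package policies declared in kibana.yml that are
// absent from the saved policy. Existing package policies are left
// untouched even if they differ from the preconfiguration.
function reconcilePackagePolicies(
  preconfigured: PackagePolicy[],
  existing: PackagePolicy[]
): PackagePolicy[] {
  const existingNames = new Set(existing.map((p) => p.name));
  const toAdd = preconfigured.filter((p) => !existingNames.has(p.name));
  return [...existing, ...toAdd];
}
```

Comparing by name sidesteps the harder questions about updating changed policies, which would still need a separate decision.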