Unable to disable Kubernetes Watchers and Leader Election for Fleet Managed Agents #5558
So I was able to reproduce the configmap issue on my end as well.
Thus, to support this feature we should:
I am also going to follow up about the list Nodes permissions 🙂
Ah, my apologies on the 0640. Does fsGroup not mount it as 0:1000 instead of 0:0, so that I would have access to it? I appear to be able to read it, since it is disabling the leader election after the 1st restart. In any case, I figure the solution is to find a way to add configs that can be passed down to an agent from Fleet, or to merge the configmap with what Fleet provides.
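For reference, here is a minimal sketch of the kind of mount being discussed; the names, image tag, and paths are placeholders rather than anything taken from the thread:

# Hypothetical excerpt of the Agent pod spec. fsGroup makes the mounted
# ConfigMap files group-owned by GID 1000 (i.e. 0:1000), and defaultMode
# 0640 gives that group read access to them.
spec:
  securityContext:
    fsGroup: 1000
  containers:
    - name: elastic-agent
      image: docker.elastic.co/elastic-agent/elastic-agent:8.15.0
      volumeMounts:
        - name: agent-config
          mountPath: /etc/elastic-agent/agent.yml
          subPath: agent.yml
  volumes:
    - name: agent-config
      configMap:
        name: agent-config        # placeholder ConfigMap name
        defaultMode: 0640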
In #4166 a change was made so that, if the new config contains the content of the default fleet config, we won't do the replacement by rotation, to help. This is not really obvious at all unless you know this PR exists. The default fleet config only contains
I added a volume mount to set the /etc/elastic-agent folder to be owned by 0:1000 so that I would be able to write to it, and I can confirm that "device is busy" is the result. I also updated the mount for the configmap to be 0660 and confirmed elastic-agent is the group, so it has read and write.
Oh I see @cmacknz, it is the other way around from what I understood the diff to be! I validated that by adding
Yeah, I can share the error. I read through the code and then updated my config to be:
instead of
and that made it skip the replace. Both are valid YAMLs, but they would cause the diff to not work.
Here is the error on 8.15.0:
and here is the configmap:
When I did:
It did not throw the error.
I can't currently test 8.16.0, as my Elastic Cloud cluster is on 8.15.1, which is the latest available.
Hmmm yep, I think that by default gopkg.in/yaml.v3 (the package used to calculate the diff) does not split keys by dots when unmarshalling YAML into a map[string]interface{}. So it makes sense.
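To illustrate the point (the fleet.enabled key below is just a stand-in for this example, not necessarily what the default fleet config actually contains):

# Both spellings are valid YAML, but yaml.v3 does not expand the dots:
# the first line unmarshals into the single literal map key "fleet.enabled",
# while the block form produces a nested map {"fleet": {"enabled": true}}.
# A key-by-key diff therefore sees two unrelated entries, which is why one
# spelling matched the default fleet config and the other did not.
fleet.enabled: true
fleet:
  enabled: true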
No need to, since the solution to this issue is the one you already figured out 🙂
I still unfortunately haven't been able to get the node watcher turned off either. I tried a bunch of different settings to disable it.
@btrieger after looking at the code I found a really puzzling (to me) piece of validation code here, which seems to always create watchers for pods and nodes by default. Now, I understand that this might indeed be the desired default behaviour, but there should be a top-level key to disable it. @btrieger, since you mentioned node permissions only and not pod ones 😄, the following config will make the node list error go away, but we can't get rid of both that and the pod watcher at the same time:

providers:
  kubernetes_leaderelection:
    enabled: false
  kubernetes:
    resources:
      pod:
        enabled: true
      node:
        enabled: false
Ah, so there is a bug where I can't disable both, only one? It appeared it also needed get, watch, and list access for namespaces.
Then let's try to go even more aggressive on disabling 🙂

providers:
  kubernetes_leaderelection:
    enabled: false
  kubernetes:
    add_resource_metadata:
      node:
        enabled: false
      namespace:
        enabled: false
      deployment: false
      cronjob: false
    resources:
      pod:
        enabled: true
      node:
        enabled: false
I will try it again when I am back at my computer. So what if I want to disable pods, namespaces, and nodes? Is that doable? Or is there a bug? Essentially I want to just disable the kubernetes provider.
From what I am seeing, currently no, it's not doable. But hey, I missed that. However, I am thinking that even if we manage to disable them at the agent level, the
I got a chance to test it. Looks like I am still getting the errors, but it could be the add_kubernetes_metadata processor, not sure:
Yep, I think this is coming from metricbeat now.
Yes, that is the add_kubernetes_metadata processor, which currently can't be turned off. This requires #4670.
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
A faster way around #4670, since it is complex, would be to expose the parts we need to configure via an env var. Something similar was done with the
We have proposed a change for the customer to set; see the last line in this partial snippet for the location of the addition:
The above proposed solution solved the customer's problem, as removing the service token prevents the provider from starting in the first place.
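For illustration only, one way to keep the service account token out of the pod is sketched below; whether this exact mechanism is what was proposed above is my assumption, not something stated in the thread:

# Hypothetical excerpt of the Agent DaemonSet spec. With no service
# account token mounted, the kubernetes provider has no API credentials,
# so it never starts its watchers or requests a lease.
spec:
  template:
    spec:
      automountServiceAccountToken: false
      containers:
        - name: elastic-agent
          image: docker.elastic.co/elastic-agent/elastic-agent:8.15.0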
@pkoutsovasilis Will this issue be completely resolved with #5912 and #5593, or are you expecting to do some more work?
Hey @ycombinator 👋 So the solution to disable everything related to the kubernetes providers is, as @strawgate provided here, to set
Awesome, thanks @pkoutsovasilis! With those three PRs merged, I am now closing this issue. Should it arise again, we have an answer for customers via the Helm chart improvements you've made. Thanks!
Attempting to deploy Elastic Agent on Kubernetes to run the threat intel integration or other API integrations. I am receiving errors related to missing permissions in Kubernetes. As I am not running the Kubernetes integration or monitoring Kubernetes itself, I shouldn't need access to the Kubernetes API server to watch nodes, namespaces, and pods. I also should not need to create a lease without the Kubernetes integration.
I am attempting to run Fleet-managed agents on Kubernetes. I have deployed the following YAMLs:
I have also tried with the configmap:
and:
The first time the pod starts it fails with the following error:
After it restarts the pod runs without lease election but repeatedly throws the below errors:
I would expect that the pod should not need to throw an error and restart in order to disable leader election, and that I should be able to disable these watchers.
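For context, here is a sketch of the kind of ConfigMap override the report describes attempting; the name, namespace, and file key are illustrative, and, per the discussion above, disabling both the pod and node watchers is exactly the combination the provider's validation did not allow at the time:

apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-config              # placeholder name
  namespace: kube-system          # placeholder namespace
data:
  agent.yml: |-
    providers:
      kubernetes_leaderelection:
        enabled: false            # stop requesting a lease
      kubernetes:
        resources:
          pod:
            enabled: false        # no pod watcher
          node:
            enabled: false        # no node watcher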