Fleet: error: could not decode the response #25405
Comments
Pinging @elastic/fleet (Team:Fleet)
@blakerouse @nchaulet Would you mind sharing the status? Do you need more reference data/logs? For clarity: due to this issue we are unable to test integrations with the latest snapshots (we are currently using ones published a week ago).
I see that fleet-server is having issues with the following error:
Did some more digging and this is not an error on 8.0. After more investigation, I see that Fleet Server missed a backport to 7.x.
Backport PR: elastic/fleet-server#231. Once it lands and a new 7.x snapshot is published, I expect this to be fixed.
Testing with the latest 7.13 snapshot (2021/04/16) and still seeing the following error:
Are we still seeing this behavior with recent snapshots? If so, @blakerouse @nchaulet any guess on which side this is on?
I haven't seen it, but this error is so generic that when it pops up it usually means something entirely different each time. Is it possible to improve logging in this case? For example, flush the raw byte content or something similar (if debug is enabled).
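A minimal sketch of what that could look like in the response-decoding path (Go; the package, type, and function names here are hypothetical and not taken from the Elastic Agent or Fleet Server code):

```go
// Hypothetical sketch of the "flush the raw byte content" suggestion above.
package fleetclient

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

// checkinResponse is a stand-in for whatever structure the agent expects back.
type checkinResponse struct {
	Action string          `json:"action"`
	Policy json.RawMessage `json:"policy,omitempty"`
}

// decodeResponse reads the whole body up front so that, when decoding fails,
// the raw payload can be logged at debug level instead of only surfacing the
// generic "could not decode the response" error.
func decodeResponse(resp *http.Response, debug bool) (*checkinResponse, error) {
	raw, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("could not read the response: %w", err)
	}

	var out checkinResponse
	if err := json.Unmarshal(raw, &out); err != nil {
		if debug {
			// Dumping the status and raw bytes makes the error actionable:
			// an HTML error page, an empty body, or a 503 becomes obvious.
			log.Printf("decode failed (status=%d), raw body: %q", resp.StatusCode, raw)
		}
		return nil, fmt.Errorf("could not decode the response: %w", err)
	}
	return &out, nil
}
```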
Also did not see this recently. ++ on improving logging. I filed #25230, please feel free to edit it and add entries with specific things that should be improved. |
It's back - seen with the latest snapshots today.
I can give you some hints. I think the CI managed to reproduce it whenever the Fleet Server health status fluctuates (see #25341).
Pinging @elastic/agent (Team:Agent)
I went ahead and moved this into the Agent iteration board :)
This is indeed an agent issue. @elastic/agent, could anyone take a look at this one?
Confirmed that this is no longer happening in local snapshots or nightly builds. Closing, though we can reopen if this is spotted again independent of #25341.
The goal of this issue is to root-cause problems with Elastic Agent/Fleet/Kibana seen with the latest snapshots. The stack boots up correctly, then a new policy should be reassigned to the agent, but it seems that this never happens.
Artifacts/logs: https://beats-ci.elastic.co/blue/organizations/jenkins/Ingest-manager%2Felastic-package/detail/PR-319/1/artifacts
We deployed an emergency fix to use the last known stable revisions. Here is the PR reverting that change: elastic/elastic-package#319. That one is expected to go back to green.
This has been impacting elastic/integrations for ~5 days now.