Revert "Emergency fix: Use stable Docker images (#318)" #319
Conversation
This reverts commit 003c019.
@nchaulet This PR should go back to green once the issue around Kibana/Fleet/Agent is solved.
💚 Build Succeeded
@mtojek @blakerouse From what I tested, this is working correctly with agent 8.0.0-SNAPSHOT, but it's not with 7.13. Did we miss a backport somewhere, or is there a build that failed?
@nchaulet I have not found a missing backport; I believe it is a snapshot build issue.
It seems that there was a successful build yesterday:
/test
/test
/test
/test
/test
I think it is the …
If so, then it's breaking for all parties, including test environments and e2e tests.
I think fleet-server.hosts must be used: https://github.com/elastic/kibana/blob/master/x-pack/plugins/fleet/server/index.ts#L58 But the other one is only deprecated 🤔 Let me pull down your code and try it out.
I tweaked the config based on the file you linked here. Let's see.
I added the following and removed the kibana part:
We should still test whether it also works with the Kibana one.
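For readers following along, here is a minimal sketch of the kind of Kibana setting being discussed: pointing Fleet at the Fleet Server host instead of relying on the deprecated kibana.host enrollment path. The file name, hostname, and port below are assumptions, not the exact values from this PR.

```yaml
# Sketch only: Kibana configuration consumed by the local stack (file name,
# hostname, and port are assumptions). Fleet enrolls agents through the
# Fleet Server host rather than the deprecated kibana.host path.
xpack.fleet.agents.fleet_server.hosts: ["http://fleet-server:8220"]
```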
@@ -19,7 +19,7 @@
       "ip": "127.0.0.1"
     },
     "event": {
-      "ingested": "2021-03-18T12:21:57.668559300Z",
+      "ingested": "2021-04-19T09:58:42.209230300Z",
Do you know why this is needed?
I regenerated the test results, as that's simple for us to do, but it still failed somewhere. Investigating.
I tried keeping kibana.host in addition, but it didn't work. Let's continue the discussion around this in the Kibana issue. Hoping this goes green, we should move forward. @mtojek You mentioned the e2e tests. Do you know where exactly we need to update them?
I noticed this issue that Manu created: elastic/e2e-testing#1048
Yes, I updated the GeoDB references. Let's wait for the CI. |
/test
The Kubernetes service deployer complains about the agent's pod being unhealthy. Investigating.
I didn't expect this (OOMKilled):
If you increase the memory, will it go through? Is this the container with fleet-server or without?
It went through.
Without. I'm investigating the next-level issue; it looks like it's related to the Kubernetes …
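As a rough illustration of the memory bump mentioned above (the container name and values are assumptions, not the exact change made in this PR), the Elastic Agent container's resources in the Kubernetes manifest would be raised along these lines:

```yaml
# Sketch only: resources for the elastic-agent container deployed by the
# Kubernetes service deployer (all values are assumptions).
resources:
  limits:
    memory: 700Mi
  requests:
    cpu: 100m
    memory: 400Mi
```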
Unfortunately, that's the price (technical debt) we're paying for not using snapshots immediately after a release.
So is the failing test related to the package or to our setup?
Currently it's the test integration:
I wonder how this is related to the "stack" change we made.
I confirmed with @ChrsMark that a field was added in Beats in the meantime.
LGTM
/test
This reverts commit 003c019 (the latest emergency fix).
Required changes:
- xpack.fleet.agents.fleet_server.hosts
- kubernetes.pod