See comment below; requesting more logging and a timestamp on the error result objects. We should ensure that all the code paths are covered to make debugging clearer.
The fact that the error isn't reported in sonobuoy results is already being addressed in another issue.
From the pod info, the container exited with code 1 at
"finishedAt": "2019-09-28T08:19:20Z"
And in the Sonobuoy logs, it stopped waiting for results at:
time="2019-09-28T08:24:20Z" level=info msg="Last update to annotations on exit"
So it did wait 5 more minutes after the container terminated to see results.
I think more logging is needed here to make this clear. There are logs in place
for when we receive results, but they are apparently not sufficiently clear when
this error mode occurs.
The error reported mentioned the termination; it would also have been helpful to list the
time that the error message was generated (so I didn't have to cross-reference the other set of logs).
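For illustration, a timestamped error result could look something like this. This is a minimal sketch; the struct and field names are hypothetical, not Sonobuoy's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// errorResult is a hypothetical sketch of an error result object that
// carries the time the message was generated, so a reader doesn't have
// to cross-reference a second set of logs to place the error in time.
type errorResult struct {
	Error     string    `json:"error"`
	Timestamp time.Time `json:"timestamp"`
}

// newErrorResult stamps the error message with the current UTC time.
func newErrorResult(msg string) errorResult {
	return errorResult{Error: msg, Timestamp: time.Now().UTC()}
}

func main() {
	b, _ := json.Marshal(newErrorResult("plugin container terminated"))
	fmt.Println(string(b))
}
```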
It does seem to be an upstream issue. Going to take a look there and try
to repro. This sort of thing has unfortunately happened before and caused us to push our own
conformance image to patch the issue.
This is also addressed by #938 since multiple timeout messages were added. That PR also changes a bit of the other timeout logic which I think makes it even more clear. Closing as duplicate.
Originally posted by @johnSchnake in #910 (comment)