Report docker error in logs when docker image goes to ImagePullBackOff, instead of reporting it as a timeout connecting to traffic-manager #2305
Labels: feature (New feature or enhancement request)
Use case / problem:
When there are issues starting the traffic-manager pod in Kubernetes, connector.log reports a timeout:
…repeated many times…
Then:
Proposed solution
The 'deployment is not ready' / '0 out of 1 pods are ready' lines in the output above could be promoted from debug level to error level, and could include an indication of why the pods aren't ready, i.e. the pod status, perhaps along with a troubleshooting hint about connecting to the ambassador namespace and running
kubectl describe pod ...
Even better, perhaps the output of kubectl describe pod could be included in the logs automatically?
In my case, a simple ImagePullBackOff error was preventing the pod from starting, and once I corrected my proxy details the image downloaded just fine - but it took quite a lot of time to realise this was the issue.
Because standard logging is at 'info' level, the problem presents in the normal logs only as a timeout, which is tricky to diagnose.
Alternatives
Maybe an FAQ section on troubleshooting timeouts when connecting to the cluster, especially if there are other typical problems users experience besides this one?
Versions