Health check endpoint when running as sidecar #662
Reading through the docs on sidecar containers, I wonder: have you just tried your deployment as written above? Sidecar containers (and init containers) start before the main container (see the docs), so your application should be able to just connect and the Proxy will be ready. If you do have problems with this approach, though, I'd be curious to hear about it. The Auth Proxy also has a built-in `wait` command (alloydb-auth-proxy/cmd/root.go, lines 400 to 427 in 4a501f6).
If your deployment above doesn't work (please let me know), then you could use `wait` like this:
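The snippet this comment pointed at did not survive extraction. As a sketch, a `startupProbe` that execs the proxy's own `wait` subcommand might look like the following; the binary path and timing values are assumptions, not the original snippet:

```yaml
# Sketch only -- assumes the proxy binary is at /alloydb-auth-proxy inside
# the image, and that the proxy was started with its health-check endpoints
# enabled so `wait` has a startup endpoint to poll.
startupProbe:
  exec:
    # exec probes run the binary directly, so no shell is required
    command: ["/alloydb-auth-proxy", "wait"]
  periodSeconds: 1
  failureThreshold: 30   # allow roughly 30s for the proxy to come up
```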
Oh wow, this actually worked! Here's what I used:
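The manifest itself was lost from the page. A plausible reconstruction of the setup this comment describes, with placeholder image tag, instance URI, and app name (all assumptions, not the poster's actual values), is:

```yaml
# Hypothetical reconstruction -- image tag, instance URI, and app name
# are placeholders, not the original values.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-alloydb-proxy
spec:
  initContainers:
    - name: alloydb-auth-proxy
      image: gcr.io/alloydb-connectors/alloydb-auth-proxy:latest  # placeholder tag
      restartPolicy: Always  # restartPolicy on an init container makes it a native sidecar
      args:
        - "--health-check"   # enable the proxy's startup/readiness endpoints
        - "projects/PROJECT/locations/REGION/clusters/CLUSTER/instances/INSTANCE"
      startupProbe:
        exec:
          command: ["/alloydb-auth-proxy", "wait"]
        periodSeconds: 1
        failureThreshold: 30
  containers:
    - name: app
      image: example.com/my-app:latest  # placeholder
```

With a native sidecar, Kubernetes does not start the regular containers until the sidecar's startup probe succeeds, which is what closes the race described in this thread.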
I can see the connection to the health check port in the logs, meaning that it's actually happening. As for the reasoning, in response to your question above:

The issue is that without a health/readiness check, an init container is considered ready as soon as its process starts, but it takes another few milliseconds to actually establish the AlloyDB connection. The main container may start up faster and begin connecting to the alloydb-proxy container before that. This is what I had, and this is why I needed a health check endpoint. So, thanks for solving this!
Nice -- the startup probe configuration looks great. FWIW, the Auth Proxy establishes connections to AlloyDB lazily, so I'd expect that once the process was up, it would be ready to receive connection attempts. But in any case, if `wait` is working for you, then I'm happy to hear it.
Question
I'm running alloydb-auth-proxy as a "native" sidecar container in Kubernetes. My main container needs to connect to AlloyDB immediately at startup. The obvious solution is to wait until alloydb-auth-proxy is ready and then start the main container.
Kubernetes provides a `startupProbe` which can open a TCP connection to a specified port, and I could use that to check whether alloydb-auth-proxy has started. But the probe requires the pod to expose the port, and I don't want to expose it, since nothing other than the main container in the pod needs to connect to it (all traffic between the sidecar and the main container should stay inside the pod).

Is there any other way to make a readiness check for the alloydb-auth-proxy container? A `/healthz` endpoint, maybe? There's no shell, so scripting isn't an option.