lb_retries apparently not working #6259
Never mind, it seems this works as expected (or doesn't, rather): https://caddy.community/t/retries-for-post-requests-with-body/23478 |
Okay, I would really like to see this. How much would it cost to sponsor development of this feature, @francislavoie ? |
See https://caddyserver.com/sponsor; you can reach out that way to get a quote. |
I got the email. Thank you for reaching out! While investigating this, I realized this should already work:
I start the proxy, and then I fire off a request:
and then quickly, before the load balancer gives up, I bring up the simulated backend:
But I was getting an error. Anyway, all I had to do was patch half a line of code: 613d544. Seems to work now. |
Actually, just following up with d93e027 which also preserves the body in case one of the backends partially (or totally) depletes it before having the connection severed. |
Awesome, thank you! |
@mholt I noticed two minor issues while playing around with this on 2.7.6 with GET requests:
|
From the official documentation of
Doesn't this kind of defeat its purpose, at least for my use case? |
Are you sure this is because of the retry? Are you not just proxying to another Caddy server?
What do you mean? What did you try to set it to? What's your goal?
In what way? Connection failures are always retryable, because obviously the request wasn't sent anywhere yet. |
Okay, here's a reproduction of the issue: Caddyfile:
I run this with Then I run And 1 second afterwards I would expect to receive a 502 error with curl, even though the proxy went up after a second. That's because the request didn't have "X-Idempotent-Retry: true" as a header. If I would run Aaand you are right about the dual "Server: Caddy" response. That's because I use Caddy as a reverse proxy, obviously - sorry. |
My use case is very similar to this simple reproduction: I have a bunch of Docker containers as proxies and I'd like to have a zero-downtime restart/deployment for requests that are idempotent. But only for those, so that we don't have a retry on a POST that was already partly processed (which could happen now, with this fix/addition). |
Yeah, exactly, you're running Caddy behind Caddy, so of course it'll have two copies of the `Server: Caddy` header.
So you want to not retry even connection problems? But why? I can't think of any reason why that would be helpful. The logic for retries is in `caddy/modules/caddyhttp/reverseproxy/reverseproxy.go`, lines 1013–1071 (commit 6a02999).
|
Okay, so the "always retry, no matter what" behavior on connection failures is intentional? |
Yeah, connection failures (before the HTTP request is written upstream) are always allowed to retry, because we know it's always safe. The `lb_retry_match` conditions only apply once the request has actually been sent to a backend. |
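The rule described here can be sketched as follows. This is an illustrative simplification, not Caddy's actual code; the function name and parameters are made up for the example:

```go
package main

import "fmt"

// shouldRetry is a hypothetical simplification of the retry rule described
// above: a connection failure is always retryable because the request never
// reached any backend; once the request has been written upstream, the
// configured retry-match conditions decide whether retrying is allowed.
func shouldRetry(connectFailed, matchesRetryConditions bool) bool {
	if connectFailed {
		return true // nothing was sent, so retrying is always safe
	}
	return matchesRetryConditions
}

func main() {
	fmt.Println(shouldRetry(true, false))  // connect failure: always retry
	fmt.Println(shouldRetry(false, false)) // sent upstream, no match: don't retry
}
```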
Thanks for the discussion here -- let me know if there's anything else needed! |
Actually, I do have another suggestion 😆: When the request body exceeds the buffer size, maybe respond with something more descriptive than the current error, e.g. 507 Insufficient Storage? |
The MDN says this status is limited to WebDAV usage only. It can also be argued that returning this status from a reverse proxy exposes more information than necessary to clients. |
Yeah, actually there's no perfect fit for this. But a |
Perhaps 413 Content Too Large? |
That's probably best. It's still not super obvious for the server admin, but much better than the 502 👍 |
Hmm, I'm not sure if that's expected. 🤔 That's not what happens for me. For me, it still reads the whole body and passes it upstream, it just only buffers up to the buffer size first. |
Okay, interesting! Here's how you can easily reproduce it (on macOS at least):
Create a big file with dummy data: Start the upstream: Then: With Caddy 2.7.6 I'm getting:
|
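The exact commands were lost in the export; here is a hedged reconstruction of the repro steps described above. The file size, ports, paths, and the choice of upstream server are all guesses:

```shell
# Create a big file with dummy data (size is an arbitrary choice):
head -c 10000000 /dev/urandom > bigfile.bin

# Start a trivial upstream on :9000, e.g.:
#   python3 -m http.server 9000
# Then POST the file through the Caddy proxy (assumed to listen on :8080):
#   curl -sS --data-binary @bigfile.bin http://localhost:8080/
```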
@gottlike Interesting, works for me:
(also, Daniel, the author of curl, would like to remind you that |
Damn, and I omitted the warning so nobody would notice 😋 How can we debug this further? I edited my post and added the Caddy logs, maybe that helps to shed a light? |
@gottlike To clarify, this is separate from retries now? Because my attempt did not employ any retries (the backend was up before running curl, and there is nothing in the config to perform retries). Let me try on a Mac. I'm on Linux currently... |
It's unfortunately working for me on Mac, too:
|
@mholt Yes, just the very simple Caddyfile I posted is enough. The errors are on Linux (x86) and MacOS (ARM) for me. Are you also using 2.7.6, not master or something? |
@gottlike Oh, I am using master, good point. We're hoping to tag 2.8 (beta) this week, can you try master? That should work! |
Ahh.. that might be it! I never built any Go project, so I would like to wait a few days and try again, once the beta release is available - if that's alright :) |
You can grab the CI artifact from here: https://github.com/caddyserver/caddy/actions/runs/8808836194#artifacts
@gottlike No need to wait, please feel free to download our pre-compiled binaries from CI artifacts: https://github.com/caddyserver/caddy/actions/runs/8808836194#artifacts 😁 Edit: Lightning Mohammed beat me to it again! |
Thanks guys! With master it works 🥳 |
I'd like to be able to recreate/restart Docker services that are hooked into Caddy via `reverse_proxy` without downtime. Usually a (POST) request takes about 10 seconds to complete. My config is super simple. When running a bunch of requests and restarting the container before the responses were returned, I always get `502` errors, despite the `lb_retries` directive. The error messages look like this:
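The config block from the original post did not survive the export; a minimal sketch of the setup described, with assumed hostnames, ports, and timings, might be:

```caddyfile
:8080 {
	reverse_proxy my-docker-service:3000 {
		# Keep trying for longer than a typical ~10s request,
		# so the restarted container has time to come back up.
		lb_try_duration 15s
		lb_retries 5
	}
}
```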