Retry failed chunks #178
Labels: priority/P2 (Medium priority issue)
Comments
I think this makes sense. It doesn't seem hard to accomplish, and it would be helpful for dumping a large database. Maybe dumpling's checkpoint could also solve this problem, but it's hard to complete that feature now.
I think we could just retry without adding any config parameters (always use the default values).
Implemented in #182 (merged).
@mightyguava After this PR #199 you can set …
Feature Request
Is your feature request related to a problem? Please describe:
For really large databases, the likelihood that a database connection is interrupted during a dump increases. Right now, dumpling doesn't retry: it just fails the chunk, logs an error message, and then fails the dump after the entire export has finished.
Describe the feature you'd like:
It would be nice if dumpling could retry with an exponential backoff. I would also include a flag so that dumpling can exit immediately if all retries for a chunk fail.
Here are the proposed new flags and behavior (a minimal retry sketch follows the list):
--retry-chunks=N, where N indicates the number of times to retry each chunk
--retry-initial-backoff=D, where D is a duration indicating how long to wait before the first retry, doubling on each subsequent retry
--fail-fast, where if a single chunk fails after all retries, the export is immediately aborted
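A minimal sketch of the proposed behavior in Go (dumpling's implementation language); the retryChunk function and the hard-coded flag values stand in for the real flag plumbing and are hypothetical, not dumpling's actual code:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical values that would come from the proposed flags
// --retry-chunks, --retry-initial-backoff, and --fail-fast.
var (
	retryChunks         = 3
	retryInitialBackoff = 5 * time.Second
	failFast            = true
)

// retryChunk tries dump once, then retries up to retryChunks times,
// doubling the backoff after each failed attempt.
func retryChunk(dump func() error) error {
	backoff := retryInitialBackoff
	var err error
	for attempt := 0; attempt <= retryChunks; attempt++ {
		if err = dump(); err == nil {
			return nil
		}
		if attempt == retryChunks {
			break // retries exhausted
		}
		fmt.Printf("chunk failed (attempt %d): %v; retrying in %s\n", attempt+1, err, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	if failFast {
		// Abort the whole export immediately instead of continuing
		// with the remaining chunks and failing at the end.
		return fmt.Errorf("chunk failed after %d retries, aborting export: %w", retryChunks, err)
	}
	return err
}

func main() {
	fmt.Println(retryChunk(func() error { return errors.New("connection reset") }))
}
```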
For us, since we use the RDS IAM Auth Token to authenticate, this poses an additional challenge: the token is only valid for 15 minutes after being created. A retry on a failed connection hours later would not be able to authenticate, so we'd need support in dumpling to fetch a new token rather than having it provided (a rough sketch follows). I'll file a separate ticket for this.
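A sketch of the token-refresh idea, using the AWS SDK for Go v1's rdsutils.BuildAuthToken to mint a fresh token for each connection attempt; the freshDSN helper and its parameters are hypothetical, not an existing dumpling API:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rds/rdsutils"
)

// freshDSN builds a MySQL DSN with a newly minted RDS IAM auth token,
// so a retry hours into the export does not reuse a token that expired
// 15 minutes after it was created.
func freshDSN(endpoint, region, user, dbName string) (string, error) {
	sess, err := session.NewSession()
	if err != nil {
		return "", err
	}
	token, err := rdsutils.BuildAuthToken(endpoint, region, user, sess.Config.Credentials)
	if err != nil {
		return "", err
	}
	// IAM auth requires TLS, and the token is sent as a cleartext password.
	return fmt.Sprintf("%s:%s@tcp(%s)/%s?tls=true&allowCleartextPasswords=true",
		user, token, endpoint, dbName), nil
}

func main() {
	dsn, err := freshDSN("mydb.abc123.us-east-1.rds.amazonaws.com:3306",
		"us-east-1", "dump_user", "test")
	if err != nil {
		fmt.Println("token error:", err)
		return
	}
	fmt.Println(dsn)
}
```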