
[fix] Fix data loss due to internal retries #145

Merged (1 commit) on Oct 8, 2023

Conversation

@gnehil (Contributor) commented on Oct 8, 2023

Proposed changes

Problem Summary:

After the write optimization, upstream data is read through an iterator. Since the iterator can only be traversed in one direction, the current batch cannot be re-read during an internal retry, which can lead to data loss.
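
The constraint can be seen in a minimal sketch (hypothetical names, not the connector's actual code): once a batch has been drained from the one-pass iterator, a retry inside the same task has no way to read those rows again.

```scala
// Minimal sketch; writePartition and doStreamLoad are hypothetical names
// used only for illustration, not the connector's real API.
object IteratorRetrySketch {
  // Stand-in for one Stream Load call that may fail.
  def doStreamLoad(batch: Seq[String]): Unit =
    println(s"loading ${batch.size} rows")

  // The partition arrives as a one-pass iterator. Each batch is consumed from
  // the iterator before it is loaded, so if doStreamLoad fails, an internal
  // retry cannot re-read the rows that were already drained.
  def writePartition(rows: Iterator[String], batchSize: Int): Unit =
    rows.grouped(batchSize).foreach(batch => doStreamLoad(batch))

  def main(args: Array[String]): Unit =
    writePartition((1 to 10).map(i => s"row-$i").iterator, batchSize = 4)
}
```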

The solution is to remove the internal retry and let the exception from the failed load be thrown.
If spark.task.maxFailures (default value 4) or other retry-related parameters are set, the Spark scheduler will retry the failed task.
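
A minimal sketch of the scheduler-side retry this change relies on, assuming a standard SparkSession setup: with the internal retry gone, the thrown exception fails the task, and Spark re-runs the whole task (re-computing the partition iterator) up to spark.task.maxFailures attempts.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch, assuming a standard Spark setup: the connector no longer
// retries a failed batch itself; the exception fails the task and the Spark
// scheduler retries the whole task according to spark.task.maxFailures.
val spark = SparkSession.builder()
  .appName("doris-write-job")
  .config("spark.task.maxFailures", "4") // default is 4; raise for more scheduler-level retries
  .getOrCreate()
```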

Other changes:

  1. Abort the transaction by its label when the current load fails (see the sketch below this list)
  2. Minor code style changes
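
A hedged sketch of the control flow for the first change, using hypothetical doStreamLoad and abortByLabel stand-ins rather than the connector's real API: on failure, the transaction identified by the load's label is aborted, then the exception is rethrown so the Spark task fails and can be rescheduled.

```scala
// Sketch only: doStreamLoad and abortByLabel are hypothetical stand-ins for
// the connector's Stream Load and transaction-abort calls.
object AbortOnFailureSketch {
  def doStreamLoad(label: String, batch: Seq[String]): Unit =
    throw new RuntimeException(s"load failed for label $label")

  def abortByLabel(label: String): Unit =
    println(s"aborting transaction with label $label")

  def loadBatch(label: String, batch: Seq[String]): Unit =
    try {
      doStreamLoad(label, batch)
    } catch {
      case e: Exception =>
        abortByLabel(label) // clean up the pending transaction on the Doris side
        throw e             // rethrow so the Spark task fails and the scheduler can retry it
    }
}
```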

Checklist (Required)

  1. Does it affect the original behavior: (Yes/No/I Don't know)
  2. Have unit tests been added: (Yes/No/No Need)
  3. Has documentation been added or modified: (Yes/No/No Need)
  4. Does it need to update dependencies: (Yes/No)
  5. Are there any changes that cannot be rolled back: (Yes/No)

Further comments

If this is a relatively large or complex change, kick off the discussion at [email protected] by explaining why you chose the solution you did, what alternatives you considered, etc.

@JNSimba (Member) left a comment:

LGTM

@JNSimba merged commit 5410651 into apache:master on Oct 8, 2023
3 checks passed