[Question]: Bypassing/purging problematic tasks ? #1383
Comments
This problem is due to dirty data generated by multiple reboots. It does not affect operation, so you can ignore it.
But unfortunately, that one problematic task is blocking all other newly dispatched tasks. Does it simply go away with waiting? I ended up purging all the Docker volumes used by RAGFlow. That fixed the issue, but of course all the documents are gone with it, which is definitely not something to do once a lot of documents have already been processed.
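The volume purge described above can be done with Compose's `--volumes` flag. A minimal sketch, assuming a standard `docker-compose.yml`-based RAGFlow deployment; this is destructive and wipes all stored documents and indexes, so take backups first:

```shell
# Run from the directory containing RAGFlow's docker-compose.yml.
# WARNING: destructive -- removes the named volumes and ALL data in them
# (documents, indexes, queued tasks).
docker compose down -v   # stop containers and delete their named volumes
docker compose up -d     # recreate everything from a clean state
```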
### What problem does this PR solve?

#1383

### Type of change

- [x] New Feature (non-breaking change which adds functionality)
I had the same problem. I finally solved it by deleting the data in Redis.
After deleting the data, the parsing process works well.
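Deleting the Redis data as described can be done through `redis-cli` inside the Redis container. A hedged sketch only: the container name `ragflow-redis` and the idea of which keys hold the stuck tasks are assumptions about your deployment, not something confirmed in this thread; inspect the keys before deleting anything:

```shell
# Hypothetical container name -- check yours with `docker compose ps`.
docker exec -it ragflow-redis redis-cli

# Inside redis-cli, inspect first, then delete selectively:
#   KEYS *        -- list keys to locate the task-queue entries
#   DEL <key>     -- remove the stuck task's key

# Or, to wipe all Redis data in one step (queued tasks included):
docker exec -it ragflow-redis redis-cli FLUSHALL
```

`FLUSHALL` clears every key in every Redis database, so it only makes sense if Redis holds nothing you need to keep besides the stuck queue entries.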
Describe your problem
A problematic task (how it was generated is unknown) is blocking all other newly dispatched tasks, including non-PDF ones. The problematic task is nowhere to be found in the WebUI, so it cannot be canceled there. The backend is constantly emitting errors like the following:
`docker compose down` followed by `docker compose up` doesn't resolve the issue. Is there a way to manually remove this problematic task? Additionally, is there an internal mechanism for purging/canceling tasks on error?