Right Way to End a Process in Sandbox #1432
I think the best solution is to use moveToFailed.
UPD: if moveToFailed really works even when the job is active, you can try to create a Queue instance in job.js and listen for the global failed event there. Then update some global variable which you periodically check in your job code.
UPD2: Just checked the moveToFailed implementation; indeed, it should work.
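As an illustration of that suggestion, here is a minimal sketch of a sandboxed processor that re-creates a Queue instance, listens for the global failed event, and checks a flag periodically. The queue name, Redis URL, and the doChunkOfWork helper are assumptions for the sketch, not taken from the issue:

```js
// job.js – sandboxed processor (a sketch; queue name and Redis URL are assumptions)
const Queue = require('bull');

// Re-create a Queue handle inside the child process so global events are visible here.
const queue = new Queue('work', 'redis://127.0.0.1:6379');

const cancelledIds = new Set();
queue.on('global:failed', (jobId) => {
  // A job moved to failed elsewhere (e.g. via job.moveToFailed) shows up here.
  cancelledIds.add(String(jobId));
});

// Hypothetical unit of work, only here to make the sketch runnable.
const doChunkOfWork = () => new Promise((resolve) => setTimeout(resolve, 100));

module.exports = async (job) => {
  for (let step = 0; step < 1000; step++) {
    // Periodically check the flag and return instead of killing the process.
    if (cancelledIds.has(String(job.id))) return;
    await doChunkOfWork();
  }
};
```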
And what if I have multiple processes running?
I mean you should not kill any processes. Instead, check the status from inside your job code and complete the job by exiting from the job callback (or by calling done()).
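In callback style, that advice looks roughly like the sketch below; shouldCancel and runOneStep are hypothetical stand-ins for real application logic:

```js
// A sketch of "complete the job from inside the callback" – no process.kill involved.
const shouldCancel = (jobId) => false;                    // e.g. consult a flag or a Redis key
const runOneStep = (data, step) => Promise.resolve(step); // one chunk of the real work

module.exports = (job, done) => {
  let step = 0;
  const tick = () => {
    if (shouldCancel(job.id)) return done(new Error('cancelled')); // finish the job early
    if (step >= 100) return done(null, { steps: step });           // normal completion
    runOneStep(job.data, step++).then(tick, done);                 // continue with the next chunk
  };
  tick();
};
```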
But how can I check that status from inside job.js?
Another thing: I tried to do a test by setting up job.js…
Did you try to import Bull and create a Queue instance in job.js?
Thanks for the help. Is there any way to capture the queue events without re-creating the queue?
I have an idea, but I am not sure it will work. Node.js enables event-driven IPC between a parent and a forked child process (process.send() and the 'message' event).
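For background, a minimal sketch of that Node.js mechanism on its own, independent of Bull (whose sandbox already uses this channel internally for its own protocol), with hypothetical file names and message shapes:

```js
// parent.js – fork a child and exchange messages over the built-in IPC channel
const { fork } = require('child_process');

const child = fork('./child.js');
child.on('message', (msg) => console.log('from child:', msg));
child.send({ type: 'cancel', jobId: 42 }); // push an event down to the child
```

```js
// child.js – react to messages pushed from the parent
process.on('message', (msg) => {
  if (msg.type === 'cancel') {
    // flip some local flag that the long-running code checks periodically
    process.send({ type: 'ack', jobId: msg.jobId });
  }
});
```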
One thing: although child processes are reused, if you kill one of them a new one will be created automatically when needed, so you will only incur a slight performance hit (due to the time needed to spawn a new process).
I see you mentioned that re-creating the queue instance in the sandbox will take more Redis connections. One thing I want to clarify: is re-creating a queue with the same name in the sandbox a common practice when you need access to the queue from the sandboxed process, or is there any other way Bull officially recommends? A use case is dispatching (adding) jobs from the sandboxed process, as in this issue. I think #714 ends up doing the same thing, since sandboxed processes load the processor file in their own process anyway.

I guess it surprised me a bit that queues needed to be re-created, but after thinking of the sandboxed process as an entirely new process, separate from the master process, everything makes sense now. There is also a caveat when using a sandboxed process if you try to access state that only exists in the master process.

As for "it would take more Redis connections", I ran an experiment and verified the additional Redis connections created by re-creating queues in the sandboxed process. I have three queues in the master process (my Express server), and when I start with no job running yet, I can see 6 client connections via CLIENT LIST.
When I hit my endpoint, which adds a job, the client list count goes up to 6 + 4 (the 2 queues re-created in the job) + 2 (the 2 pub/sub clients created in the job) = 12, which matches the expected result. The only remaining problem is on the UI side.
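In case anyone wants to reproduce the count, one possible way to read it from Node (a sketch assuming ioredis and a local Redis on the default port) is to issue CLIENT LIST and count the lines:

```js
// count-clients.js – print how many client connections Redis currently sees
const Redis = require('ioredis');

const redis = new Redis(); // assumes redis://127.0.0.1:6379

redis.client('LIST').then((list) => {
  const clients = list.trim().split('\n'); // one line per connected client
  console.log(`Redis reports ${clients.length} client connections`);
  return redis.quit();
});
```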
I would like to know how to terminate a job that is in progress from another server. I have an Express route that receives the id of a job in progress; is there any official (or even temporary) way to finish that job?
I really need this kind of feature, because my jobs are resource-intensive; there is no reason to keep a 5-10 minute job running after the client has cancelled it.
Server 1 - API Client:
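A rough sketch of what a cancel route on the API server could look like, assuming Express and Bull; the queue name, the route path, and the use of moveToFailed (the mechanism discussed above, not an official cancel API) are assumptions:

```js
// server-1-api.js – Express route that asks for a running job to be stopped
const express = require('express');
const Queue = require('bull');

const app = express();
const workQueue = new Queue('work', 'redis://127.0.0.1:6379'); // same queue name as the worker

app.post('/jobs/:id/cancel', async (req, res) => {
  const job = await workQueue.getJob(req.params.id);
  if (!job) return res.status(404).end();

  // Mark the job as failed; the sandboxed processor sees the global failed event
  // (or a flag keyed on the job id) and returns instead of being killed.
  await job.moveToFailed({ message: 'cancelled by client' }, true);
  res.status(202).json({ id: job.id, cancelled: true });
});

app.listen(3000);
```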
Server 2 - Jobs:
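And a matching sketch for the jobs server, which only registers the sandboxed processor; the cancellation check itself lives in job.js as sketched earlier (file names and queue name are assumptions):

```js
// server-2-jobs.js – worker that runs the sandboxed processor
const Queue = require('bull');
const path = require('path');

const workQueue = new Queue('work', 'redis://127.0.0.1:6379');

// Each job runs in a separate child process; job.js contains the cancellation check
// (Queue instance + global failed listener + periodic flag check).
workQueue.process(path.join(__dirname, 'job.js'));
```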
I have some remarks:
1 - Is using a global event to capture the canceled job correct? Will it not cause failures in case of multiple job instances?
2 - How do I cancel the job using job.queue.childPool? I tried using process.kill, but it ends the whole task list.