Jobs weird on reload? #831

Closed
embolalia opened this issue Jul 5, 2015 · 6 comments
Comments

@embolalia (Contributor) commented Jul 5, 2015

5.x reload uses bind_commands, which clears the job queue. This (I think) cancels all the jobs, meaning everything scheduled probably breaks when any module is reloaded. The 6.0 branch doesn't do anything with jobs on reload, so the old versions of a reloaded module's jobs keep running. I haven't verified this behavior, but it's probably the case.
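To make the failure mode concrete, here is a toy sketch (the classes and names below are stand-ins, not Sopel's real API) of what clearing the whole queue on reload does to jobs owned by modules that were never touched:

```python
# Toy stand-in for the scheduler, purely to illustrate the report above;
# none of these names are Sopel's real API.
class Job:
    def __init__(self, interval, func):
        self.interval = interval  # seconds between runs
        self.func = func          # callable to run


class JobScheduler:
    def __init__(self):
        self.jobs = []

    def add_job(self, job):
        self.jobs.append(job)

    def clear_jobs(self):
        self.jobs = []


scheduler = JobScheduler()
scheduler.add_job(Job(60, lambda: print("module_a tick")))   # set up by module_a
scheduler.add_job(Job(300, lambda: print("module_b tick")))  # set up by module_b

# Reloading module_a in 5.x goes through bind_commands(), which ends up
# clearing the whole queue -- module_b's job disappears too, even though
# module_b was never reloaded.
scheduler.clear_jobs()
assert scheduler.jobs == []
```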

@embolalia embolalia added the Bug Things to squish; generally used for issues label Jul 5, 2015
@embolalia embolalia modified the milestone: 6.0.0 Jul 5, 2015
embolalia added a commit that referenced this issue Aug 29, 2015
Just a stop-gap to make #831 not a regression so 6.0 can get out the
door. It's not the right solution, but it's slightly better than what
was there before. Only reloading a module with a job actually in it will
kill all the jobs now, rather than any module. So, progress?
@embolalia (Contributor, Author)

Tweaked so that this is no longer any worse than previous versions, which means it no longer blocks the 6.0 release. It's still not right, though, so I'm leaving it open.

@embolalia embolalia removed this from the 6.0.0 milestone Aug 29, 2015
@lramati (Contributor) commented Aug 30, 2015

Might be a complicated fix, but what if we saved some sort of reference to every job a module sets up, so that when the module is unloaded/reloaded we can kill it? Add some function to the JobScheduler that takes a Job object (or the ID of said object, or some other unique identifier) and removes all instances of that job from the queue?
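A rough sketch of what such a method could look like, using a simplified scheduler (the class below is a stand-in, and `remove_job` and its matching rule are assumptions, not an existing Sopel API):

```python
import threading


class JobScheduler:
    """Simplified stand-in used only to sketch the proposed method."""

    def __init__(self):
        self._jobs = []
        self._mutex = threading.Lock()

    def add_job(self, job):
        with self._mutex:
            self._jobs.append(job)

    def remove_job(self, job):
        # Hypothetical method in the spirit of the suggestion above: drop
        # every queued instance of a specific Job. Matching is by identity
        # here; matching on a job ID or on the job's callable would work
        # the same way.
        with self._mutex:
            self._jobs = [j for j in self._jobs if j is not job]
```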

fatalis pushed a commit to fatalis/sopel that referenced this issue Jan 7, 2016
@dgw (Member) commented Mar 29, 2018

Possibly related: #1053, which does away with the clear_jobs() call and loops through the intervals defined on the callable being unregister()ed.
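For context, a hedged sketch of that shape of fix at the unregister site (`remove_callable_job` is a hypothetical name, not the method #1053 or Sopel actually uses):

```python
def unregister(self, obj):
    # Sketch only: instead of clear_jobs(), walk the intervals attached to
    # the callable being unregistered and remove just the matching jobs.
    # remove_callable_job() is a hypothetical scheduler method.
    for interval in getattr(obj, 'interval', []):
        self.scheduler.remove_callable_job(obj, interval)
```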

@Exirel (Contributor) commented May 1, 2019

Will be fixed in #1479.

@Exirel Exirel self-assigned this May 1, 2019
@Exirel Exirel closed this as completed May 2, 2019
@Exirel (Contributor) commented May 2, 2019

It was already fixed in 6.5.x (or was it 6.6.x? @dgw will know), but it is not yet fully available in master.

@dgw (Member) commented May 2, 2019

> It was already fixed in 6.5.x (or was it 6.6.x?, @dgw will know) but not yet fully available in master.

Nope, 6.6.x still clears the whole queue:

sopel/sopel/bot.py, lines 219 to 222 at 684c0b4:

    if hasattr(obj, 'interval'):
        # TODO this should somehow find the right job to remove, rather than
        # clearing the entire queue. Issue #831
        self.scheduler.clear_jobs()

Nevertheless, it's fixed on master now with #1479 merged, so… yay.
