purge events on stop #552
Conversation
Oh, interesting. I don't think you're using dispatch queues (at least not directly); you're using the CF run-loops. I wonder if the issue is present with them. Do you have many watcher open/close tests? Say, a dozen or so watchers created and destroyed very quickly, with some events in the background for them, may trigger it. (The fault is not deterministic, and I haven't pinned down anything in particular that triggers it, but repeatedly opening and closing watchers with some events for them in the background does make it more likely.) If so, have you ever seen a segfault with a stack like this?
We currently have no tests for a ton of files. Might be interesting to reproduce this.
This might be a dispatch-specific problem. I was unable to reproduce the issue with a minimal FSEvents+CFRunLoop-based implementation or with this library. A minimal-ish FSEvents+Dispatch implementation seems to reliably reproduce the issue, given enough invocations. There's a file here: https://github.com/e-dant/watcher/blob/release/etc/wip-fsevents-issue/main.cpp which, although there's ample debug logging and commented-out blocks throughout, does seem to hit the issue after a few hundred runs with enough events bottled up.

As best I can tell, there's some inconsistency between a user asking FSEvents to stop processing events and it actually doing so. In particular, lots of events received over a short period of time by very short-lived watchers seems to be ill-handled by FSEvents when it's scheduled with dispatch.

I think we should revert this PR. If you're interested, and if I have some spare time this weekend, I could write some basic tests for rapidly opening and closing some watchers. If so, do you prefer tests in the module or in the tests directory?
If possible, in the tests directory; whatever works better. Is there a strong reason to revert it? If it changes the behavior for users, we can leave it as an attempted improvement.
There are edge cases in which, when many events are pending, the stream's associated callback will still be invoked despite the stream having been stopped. Purging events is intended to prevent this.
Discovered this and banged my head on it in my watcher for a while. Eventually fixed here: e-dant/watcher@3a45a7f