It'd be nice if we had a test helper for systematically injecting cancellations #77
Comments
I guess this is something that the coverage folks have discussed a bit in the past, e.g. this thread from 2010: https://groups.google.com/forum/m/#!topic/coveragepy-dev/pKgslISuFjM

We should probably implement the ability to test cancellations before talking to the coverage folks about measuring it, since there's not much point in asking them to implement something that people can't actually use :-). Probably the request would be (a) a configurable (opt-in) way to treat […]
The generic stream tests are probably a place that could make use of this.
Not sure what I was thinking here exactly, but it doesn't make sense to pick random internal cancel scopes and cancel them, since cancel scopes don't spontaneously cancel themselves like that normally. The obvious case is testing external cancellation of some public API, like:

```python
cancel_tester = CancelTester()
while cancel_tester.work_to_do():
    with open_cancel_scope() as scope:
        cancel_tester.set_scope(scope)
        await do_http_request()
    # make some assertions about how it handled it,
    # check for ResourceWarnings, whatever
```

(This is a pretty awkward way of doing the API; the ergonomics could be improved, but you get the idea.)

The other case that might make sense is testing internal timeouts, since timeouts can go off semi-randomly. For this we'd want to integrate with MockClock, I suppose, like: check when the next timeout could happen and make time jump forward to that.
Whatever API we come up with needs to have some provision for doing per-run setup/teardown kind of stuff that happens outside of the cancel-scope-under-test. For example, you might want to fire up an HTTP server, start a client against it, cancel the server but not the client, and then make some assertions about what the client sees. And you need a new client each time.
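One hypothetical shape for that — every name below is invented for illustration, nothing like this exists in trio:

```python
# Purely hypothetical API sketch: systematic_cancel_test would call the test
# function once per injected cancellation, so setup and teardown naturally
# run fresh on every iteration.
@systematic_cancel_test
async def test_server_sees_cancel(scope_under_test):
    server = await start_test_server()          # per-run setup
    client = await connect_client(server)
    try:
        with scope_under_test:                  # only this part is cancelled
            await server.handle_one_request()
        assert client.connection_closed_cleanly()
    finally:
        await client.aclose()                   # per-run teardown
        await server.aclose()
```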
I think the core change to implement this would be to add an instrument hook for "task is at a cancellation point". Possibly the cleanest way to do this would be to call it from […]. The other way to do it would be to put the instrument hook in […]. Though... that's... actually what happens in both designs. Ugh. I guess we could give […].

This all might require a bit of code rearranging, since […]. Possibly this is also an argument for having some kind of first-class support for […].
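For concreteness, here is what such a hook could look like, modeled on the existing `trio.abc.Instrument` interface — the hook name is invented, since the whole question above is where it would be called from:

```python
import trio

class CancelPointRecorder(trio.abc.Instrument):
    # trio.abc.Instrument is real; task_reached_cancel_point is the
    # *proposed* hook and does not exist in trio today (name invented).
    def task_reached_cancel_point(self, task):
        # Would fire each time `task` reaches a point where a Cancelled
        # exception could be delivered.
        print(f"{task.name} is at a cancellation point")
```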
(Side note: if we add an instrumentation hook for cancel points, then we might also want to think about moving […].)
It's well known that error handling and cleanup code is notoriously prone to bugs, because it's hard to test and often untested. We have full visibility into the cancellation states of the code we run; it should be possible to provide a pretty sweet fault injection framework to test cancellation handling.
For example, a possible API would be: run this test function repeatedly, while injecting cancellations at different points, until all possible cancellations have been tried, and let any exceptions escape.
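A minimal sketch of such a driver, under stated assumptions: trio has no per-checkpoint instrument hook today (adding one is the point of this issue), so this uses the real `before_task_step` hook as a coarse stand-in, and `async_fn` is whatever code is under test:

```python
import trio

class StepCancelInjector(trio.abc.Instrument):
    """Cancels `scope` just before the run's nth task step.

    before_task_step is a real Instrument hook; step counts stand in
    for checkpoint counts until a per-checkpoint hook exists.
    """
    def __init__(self, scope, nth):
        self.scope, self.nth, self.steps = scope, nth, 0

    def before_task_step(self, task):
        self.steps += 1
        if self.steps == self.nth:
            # Cancelled will be delivered at the task's next checkpoint
            self.scope.cancel()

async def run_once(async_fn, nth):
    with trio.CancelScope() as scope:
        instrument = StepCancelInjector(scope, nth)
        trio.lowlevel.add_instrument(instrument)
        try:
            await async_fn()
        finally:
            trio.lowlevel.remove_instrument(instrument)
    return scope.cancelled_caught  # True iff the injection fired

def exhaustively_inject(async_fn):
    # Inject one step later on each run; stop once a run completes without
    # the injection firing. Any exception from async_fn simply escapes.
    nth = 1
    while trio.run(run_once, async_fn, nth):
        nth += 1
```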
Technically, this might be implemented as: […]

There's some subtlety to the choice of key: […]
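For illustration only (the issue's actual list of candidate keys was lost in extraction): one plausible key is the stack of (filename, line) locations above the suspended checkpoint, so that the same `await` in the same call context is only injected once. A rough sketch:

```python
def checkpoint_key(task):
    # Follow the chain of awaits from the task's outermost coroutine down to
    # the innermost suspended frame, collecting source locations along the way.
    key, coro = [], task.coro
    while coro is not None and getattr(coro, "cr_frame", None) is not None:
        frame = coro.cr_frame
        key.append((frame.f_code.co_filename, frame.f_lineno))
        coro = getattr(coro, "cr_await", None)
    return tuple(key)
```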
It would also be neat if there were a way to teach coverage to report on which cancellation branches had been exercised.

See also: #239