Add support for dmypy on Windows #5859
Conversation
Okay, I tested things on Python 3.4 and 3.7 and it works, so this should be ready for review.
Sorry, I don't think I can finish this review. I hear Sully is looking at it. But I already created some comments so here they are.
No problem, glad to get some feedback! @msullivan happy to talk on Friday about this if you want.
Thanks so much for this!
This all looks pretty plausible, and I think I'm basically happy with the daemonizing approach, but I'm pretty nervous about the named pipe stuff. I think duplicating the `serve` function will lead to mistakes and bugs down the road (and the Windows version getting broken), especially since we don't actually have any automated tests for this IIRC.
I see two options here:
- Abstract out all the opening/closing/sending logic for Windows and unix into a library/object with a consistent interface, so we can have a single `serve` routine.
- Just totally punt on using named pipes and switch to using TCP sockets (bound to localhost). There would be some difference between setting it up for domain sockets vs TCP sockets, but most of the socket code doesn't change at all. (Or we could honestly just always use TCP sockets, even on unix.)
My inclination is that we should do the latter, unless there is a major advantage of named pipes that I'm missing?
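For concreteness, the localhost-TCP option would look roughly like this. This is only an illustrative sketch; `make_listener` and `connect_to_daemon` are hypothetical helpers, not mypy's actual code:

```python
import socket

def make_listener() -> socket.socket:
    """Listen on a TCP socket bound to the loopback interface.

    Binding port 0 lets the OS pick a free port; the daemon would then
    record sock.getsockname()[1] (e.g. in its status file) so clients
    know where to connect.  This works identically on Windows and unix.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('127.0.0.1', 0))  # loopback only; not reachable from the network
    sock.listen(1)
    return sock

def connect_to_daemon(port: int) -> socket.socket:
    """Client side: connect to the port the daemon advertised."""
    return socket.create_connection(('127.0.0.1', port))
```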
Yeah, I can definitely understand that. The reason I chose NamedPipes was twofold: a) it is what Steve Dower recommended and b) it's what almost everyone else uses (at least from what I have seen). I am, however, not totally married to using them.
Yeah, that is okay with me. I think moving entirely to TCP sockets makes the most sense, because then we can just share all IPC code across platforms, which makes the odds of things breaking very low.
Yeah, in a very real sense they seem like they are the Right Thing for the job. I think the advantage of NamedPipes (and unix domain sockets) is that they have access control on them. This can matter on multi-user machines, but I think that multi-user Windows dev machines aren't really a thing?
I am somewhat inclined to keep using unix domain sockets when available, since multi-user Linux machines do still exist? It does mean some more conditional code, though I think a pretty small amount (just starting to listen and establishing the connection; all the communication should be the same).
Though maybe this is a sign that we should stick with NamedPipe also? Though I think it does need to be abstracted out into a set of communication routines.
I think in the interest of not making assumptions about a user's environment, erring on the side of caution makes sense here. While multi-user Windows computers are much less common, I don't think we should preclude them. Changes to the actual IPC are likely going to be pretty rare, so I think a wrapper makes the most sense. It gives the added bonus that we can change the underlying implementation later on without needing to change any of the dmypy code.
I didn't read the code, but I do agree that we should stick with unix domain sockets and named pipes, because of the access control they have. It shouldn't be hard to have a simple abstraction on top of these. (Though perhaps there might be a 3rd party package that does this?)
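A minimal sketch of what such an abstraction might look like, with hypothetical names throughout and only the unix-domain-socket side shown (a Windows named-pipe class would implement the same interface, e.g. via pywin32):

```python
import os
import socket
from abc import ABC, abstractmethod
from typing import Optional

class IPCBase(ABC):
    """Transport-agnostic channel the dmypy code talks to.

    With all reads and writes going through this interface, the underlying
    transport (unix domain socket, Windows named pipe, ...) can be swapped
    out without touching any of the command logic.
    """

    @abstractmethod
    def read(self) -> bytes: ...

    @abstractmethod
    def write(self, data: bytes) -> None: ...

    @abstractmethod
    def close(self) -> None: ...

class UnixIPCServer(IPCBase):
    """Unix-domain-socket server: the socket file's permissions give
    per-user access control on multi-user machines."""

    def __init__(self, path: str) -> None:
        self.path = path
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.bind(path)
        self.sock.listen(1)
        self.conn = None  # type: Optional[socket.socket]

    def accept(self) -> None:
        self.conn, _ = self.sock.accept()

    def read(self) -> bytes:
        assert self.conn is not None
        return self.conn.recv(100000)

    def write(self, data: bytes) -> None:
        assert self.conn is not None
        self.conn.sendall(data)

    def close(self) -> None:
        if self.conn is not None:
            self.conn.close()
        self.sock.close()
        os.unlink(self.path)  # clean up the socket file on shutdown
```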
I think I have now given all of the feedback I intend to.
Sorry for the slow drip, here. I spent a long time resisting the necessity of sucking it up and reading all the named pipe documentation.
Hm, no idea why the address would not be found in the tests; they are passing locally. I will investigate more tomorrow.
I'm chasing a flake in the end-to-end daemon tests on Windows, so I will be pushing a bunch of commits trying to find what is possibly a race condition, sorry!
While thinking about what might cause flakes in #5859, I found a race condition between `dmypy stop` and other `dmypy` commands. `dmypy stop` will send a response and close the socket before the status file has been deleted. This means there is a window in which the daemon has reported to a client that it has exited while a status file still exists. This can result in a number of issues, including the daemon appearing to be stuck (instead of stopped) to subsequent commands, and the exiting server deleting the status file of a subsequently started server.

The fix is to remove the status file in `cmd_stop` before replying to the request. This ensures that, as far as clients are concerned, the daemon has exited once a stop command completes.

I tested the bug and the fix by inserting a `time.sleep(1)` immediately before the `sys.exit(0)` in `serve`: this caused several tests to fail, and the changes fixed them. I believe the observed flakes in the Windows version in #5859 were caused by this issue, in a way that the unix version was not susceptible to.
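The shape of that fix, as an illustrative sketch (names like `STATUS_FILE` and `Server` are placeholders, not the exact mypy code):

```python
import os

STATUS_FILE = 'dmypy.json'  # placeholder for the daemon's status file name

class Server:
    def cmd_stop(self) -> dict:
        """Stop the daemon.

        Unlink the status file *before* the reply is sent: once a client
        has seen the response, no stale status file remains that could
        make the daemon look stuck, or get deleted out from under a
        freshly started server.
        """
        os.unlink(STATUS_FILE)
        return {}  # the serve loop sends this reply and then exits

# The buggy ordering was effectively:
#   send reply -> close socket -> unlink status file -> exit
# leaving a window in which a client believes the daemon has exited
# while its status file still exists.
```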
Okay, I have run @msullivan's patch to fix the race condition 9 times on AppVeyor and it passed every time. So I think we are finally rid of the flake caused by the race condition 🎉
I have a few remaining nits, but this looks great and should be ready to land once addressed.
Thanks for all of the great work on this!
After a while of banging my head against the Windows APIs today, I finally have most of dmypy running on Windows (wow this makes rechecks fast!).
Here is what works and what doesn't. Everything that is publicly documented should work now:
- `dmypy start`
- `dmypy stop`
- `dmypy restart`
- `dmypy run`
- `dmypy kill`
- `dmypy check`
- `dmypy status`
And here is a photo of it working :)
This depends on python/typeshed#2571. Until then selfcheck will fail (on Windows).

Closes #5019