qrexec sometimes leaks disposable VMs #9081
Comments
Unlikely: when qrexec-client was called from the Python code, the disposable cleanup was handled there (with try/finally).
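For reference, the try/finally pattern being described looks roughly like this (a sketch only; the helper names are hypothetical stand-ins, not the actual qubesd code):

```python
# Sketch of the try/finally cleanup pattern mentioned above.
# create_dispvm, call_service and cleanup_dispvm are hypothetical
# stand-ins for the real qubesd helpers.

def run_in_dispvm(create_dispvm, call_service, cleanup_dispvm):
    dispvm = create_dispvm()
    try:
        return call_service(dispvm)
    finally:
        # Runs even when call_service() raises, so the disposable
        # is cleaned up on error paths too.
        cleanup_dispvm(dispvm)
```

The point is that the cleanup is guaranteed by the language construct, regardless of which error path the service call takes.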
There is no such thing as automatic dispvm creation (and thus cleanup) by qrexec-daemon for dom0-initiated calls; if nothing else, dom0 doesn't have its own qrexec-daemon. As for the actual issue, one option is to restructure the code to not use …
Indeed, so this is an R4.2 regression.
I thought that Python code that calls …
I can try this and see how complex it is. Some of the calls to …
This is indeed an option, provided that … A better solution would be to change the qubesd API: have the caller of …
Nope, qrexec calls (which the Admin API conceptually is) do not have this capability. Even if it would technically be possible for a call made from dom0 (as it is implemented right now), let's not abuse the protocol.
That’s a much better (and cleaner) solution, thank you. It’s still potentially awkward, and has issues if qubesd gets restarted, but at least it’s not a disgusting, dom0-only hack. Want me to work on this now, or should I focus on other tasks instead? A review of QubesOS/qubes-core-qrexec#138 would be appreciated: it fixes #9036 and #9073 as well as some test-suite bugs, and it’s a prerequisite for #9037.
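A caller-managed lifetime along these lines could be wrapped in a context manager, so whoever creates the disposable is always the one tearing it down (a sketch under assumed names, not the final API):

```python
from contextlib import contextmanager

@contextmanager
def disposable(create, destroy):
    # create/destroy are hypothetical callables wrapping the actual
    # Admin API requests; the point is that the caller that creates
    # the disposable also destroys it, even if its work fails.
    vm = create()
    try:
        yield vm
    finally:
        destroy(vm)
```

As noted above, this still cannot help if qubesd itself is restarted while the with block is active; that case needs separate handling.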
How to file a helpful issue
Qubes OS release
R4.2 but I suspect this has been a problem since before R4.1.
Brief summary
Various error paths in qrexec-client call exit() without cleaning up disposable VMs. This could be fixed with atexit(), but this is incompatible with QubesOS/qubes-core-qrexec#136.

Steps to reproduce
Not sure. I found this problem by source code inspection. I suspect it can be triggered by causing a vchan I/O error. In the case of dom0-initiated calls, this could also leak Xenstore entries.
Expected behavior
Everything cleaned up.
Actual behavior
Not everything cleaned up.
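To illustrate the atexit() approach mentioned in the brief summary (shown in Python for brevity; qrexec-client itself is C, where the analogous mechanism is atexit(3)): handlers registered with atexit still run when the process exits through an error path, which is exactly what the bare exit() calls currently skip.

```python
import subprocess
import sys

# Child process: registers a cleanup handler, then bails out through
# sys.exit(1) as an error path would. The handler still runs.
# (Note: os._exit(), like C's _exit(), would bypass such handlers.)
child = """\
import atexit, sys
atexit.register(lambda: print("cleaning up disposable"))
sys.exit(1)
"""

result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())  # -> cleaning up disposable
print(result.returncode)      # -> 1
```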