
Investigate per-source dispVM #333

Open
deeplow opened this issue Nov 13, 2019 · 10 comments

Comments

@deeplow
Contributor

deeplow commented Nov 13, 2019

Description

Following a conversation on gitter, I'd like to propose the idea of per-source dispVMs. If possible, it would allow us to quickly open any other document from the same source, as long as that source's original dispVM is still running.

User Research Evidence

No user research has been conducted on this, but I would suggest evaluating the user reaction to the relatively long time it takes to open documents.

User Stories

(made up story)

As a journalist, I don't like having to wait (~15 secs, the time it takes to open a dispVM) every time I open a new document.

Comments

Security implications: these will have to be considered carefully, especially the flow of information.

Possibly related issues:

@deeplow
Contributor Author

deeplow commented Nov 13, 2019

Transcribing relevant gitter chat:

deeplow: I was just trying to think of ways that the opening of dispVMs could be improved for the particular situation of SecureDrop. Have there ever been any considerations on doing a dispVM per source? Like where opening the first document would take those 15 secs of bootup but everything else would be quick (note: I don't know how flexible Qubes is about this, and I'm aware the security implications of this would need to be well considered)
(the above assumes sources typically send many documents in a row, which I don't know is often the case)

eloquence: Interesting question, I don't know that we've ever discussed a dispvm-per-source. The main security consideration I can think of is that a source's codename may have been compromised, but even so, an exploit would still be very difficult to accomplish given the lack of network-level access inside a dispVM. Curious if @emkll has thoughts on this if he's still around.

There are definitely cases where a source submits many documents, and the performance footprint of launching many individual dispVMs will be significant.

Practically, I can imagine that it would be difficult to determine when to launch and when to shut down dispVMs if it's not tied to an application exiting.

deeplow: Just as a random idea, I'm thinking you could tie it to the X11 windows open, for example. For other Qubes stuff this doesn't work because there can always be a background process, like the safe-pdf conversion. But in this particular case, the journalist will always be wanting to visualize something, I think.

To make it more explicit. When there are no more windows open, you shut down the VM. I think both dom0 and the dispVM should be aware of the number of windows open.

eloquence: It's an interesting idea -- you'd still get the performance penalty if you open/close windows one by one, but you'd re-use a per-source VM if you open a bunch of documents in sequence. I defer to the team whether this is technically viable, and acceptable from a security perspective.

deeplow: cool :)
There may be even better ways of doing that. I just thought of that right now.

you'd still get the performance penalty if you open/close windows one by one

A timeout between windows closing would probably help with that.

emkll: I think one dispvm per source could be interesting, but it might be hard to implement in practice (named dispvms would need to be created/deleted by sd-svs, as named dispvms don't automatically get deleted when they are shut down, and don't automatically shut down when the program they were opened with is closed). This means we would need to give sd-svs admin rights via the Qubes admin API, and I am not sure if we could granularly restrict those permissions (for example, can sd-svs set the netvm through the admin API?). This could become an exfiltration vector, and could lead to some issues with memory management, as open-in-dvm calls are nice because the VM is shut down when the application exits.

deeplow: emkll thanks for sharing your thoughts. Those were the kinds of concerns I was referring to.

eloquence: for an unnamed dispvm, would it be viable to launch some "helper" app which passes along file open requests, and when that helper is closed, the dispvm is closed as well? or do we have no way to communicate with that dispvm once it's launched?

deeplow: I was also wondering the same eloquence
the trusted PDF implementation should at least have some kind of communication to send the file to be converted, but I don't know how they did it

redshiftzero: it is possible (though it would need to be implemented carefully, as we would be allowing the potentially compromised dispVM to communicate with the sd-svs VM). It's something that we wanted to do in the first prototype (spawn a dispVM and then make qrexec calls back to the source VM to return status info about submission processing); we ran into an upstream issue which is here QubesOS/qubes-issues#3318 and is still open (this is linked in freedomofpress/securedrop-client#52)

If anyone has any more up-to-date information on the above, let me know.

deeplow:

it is possible (though would need to be implemented carefully as we would be allowing the potentially compromised dispVM to communicate with the sd-svs VM)

In this case I don't think there's a need to have communication both ways, @redshiftzero. So if it is possible to restrict the communication to be only one way (sd-svs --> dispVM), I think that attack surface could be minimized.

[...]

redshiftzero: whoops, I missed the prior message from deeplow - ah, I was talking about the (simpler to implement, I think) case, which is tracking the number of open dispVMs to prevent the user from opening too many. Even if we have per-source VMs (which is a good idea!), we still have to consider the situation where the user can open VMs corresponding to many sources. For tracking the number of open dispVMs from sd-svs, the communication would need to be dispVM -> sd-svs in order for the dispVM to communicate to the caller that it's closing (there might be a smarter thing to do here, thoughts welcome)
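
As a rough sketch of the one-way restriction discussed in the transcript above (sd-svs --> dispVM only, nothing back): in the legacy qrexec policy format, /etc/qubes-rpc/policy/qubes.OpenInVM could look roughly like the following. The @tag:sd-viewer-dvm tag is an assumption here (it only appears later in this thread), this only covers the qubes.OpenInVM service (other services would need their own deny rules), and it is a sketch rather than a tested policy:

# Allow sd-svs to open files in the per-source viewer dispVMs, but deny the
# reverse direction so a compromised viewer cannot call back into sd-svs.
sd-svs              @tag:sd-viewer-dvm  allow
@tag:sd-viewer-dvm  sd-svs              deny
@anyvm              @anyvm              deny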

@deeplow
Contributor Author

deeplow commented Jan 21, 2021

A discussion on the Qubes forum might provide some insights on how this may be accomplished. See it here: https://qubes-os.discourse.group/t/opening-more-programs-in-running-disposable-appvm/2389/4

This is a related (community) documentation piece.

@deeplow
Contributor Author

deeplow commented Mar 3, 2021

Hi all! I've kept this in the back of my mind (it's more than a year later). But I think I've figured out how this can be done properly, with the help of some valuable feedback from Qubes community members.

This is just a POC. If this gets the green light, I'd like to leave the hardening of it to someone who knows more than me.

In particular, I've addressed the following concern by @emkll:

emkll: I think one dispvm per source could be interesting, but it might be hard to implement in practice (named dispvms would need to be created/deleted by sd-svs, as named dispvms don't automatically get deleted when they are shut down, and don't automatically shut down when the program they were opened with is closed). This means we would need to give sd-svs admin rights via the Qubes admin API, and I am not sure if we could granularly restrict those permissions (for example, can sd-svs set the netvm through the admin API?).

Now all that is needed is for sd-app (previously sd-svs, I think) to call a dom0 RPC, passing along the source's code name as the parameter.

The "gist of it"

  1. sd-app --sd.CreateSourceDisp--> dom0
    This dom0 RPC service creates a named DisposableVM which will self-destruct after use. The VM is named after the source's code name (CodeName will be used as an example).
  2. sd-app opens document in disposable CodeName
  3. sd-app opens another document in disposable CodeName
  4. User closes all documents for this source
  5. VM CodeName shuts down automatically as no more documents are open there.

Implementation

Below are the VMs where each piece of code would go:

sd-viewer

This disposableVM template will need to have the package qubes-app-shutdown-idle installed, which should shut it down when idle (i.e. no documents open). However, from my testing this is not very accurate, so something custom could probably be implemented to do a better job. In particular, see the following comment of mine:

Just as a random idea, I'm thinking you could tie it to the X11 windows open, for example. For other Qubes stuff this doesn't work because there can always be a background process, like the safe-pdf conversion. But in this particular case, the journalist will always be wanting to visualize something, I think.

Dom0

A Qubes RPC service in dom0 creates a named disposable VM named after the source's code name. It is set to auto-destruct after use.

File /etc/qubes-rpc/sd.CreateSourceDisp:

#!/bin/sh

# TODO Fully sanitize this $1 parameter to make sure it is only a source's code
# name and nothing else. Otherwise a compromised sd-app will have an RCE in dom0

if [ -z "$1" ]; then
    # This service requires an argument
    exit 1
fi

# Create a named disposable VM and set it to auto-destroy once shut down
/usr/bin/qvm-create --class DispVM sd-viewer-$1 --template=sd-viewer \
--label=red --property auto_cleanup=True

# Give it a tag so we can target the policy from sd-app onto this new VM
/usr/bin/qvm-tags sd-viewer-$1 add sd-viewer-dvm

File /etc/qubes-rpc/policy/sd.CreateSourceDisp:

sd-app	dom0	allow

File /etc/qubes-rpc/policy/qubes.OpenInVM:

sd-app	@tag:sd-viewer-dvm	allow

sd-app

Then the currently implemented way of opening files in an sd-viewer dispVM from the SecureDrop Client would need to be replaced with something like the following commands:

qrexec-client-vm dom0 sd.CreateSourceDisp+<SOURCE_CODE_NAME>
qvm-open-in-vm --view-only sd-viewer-<SOURCE_CODE_NAME> <DOCUMENT>

The application doesn't really need to keep track of whether the named dispVM already exists, because if it does, the first command will simply fail.
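
As a minimal sketch of how sd-app could wrap these two calls (the helper and its arguments are made up for illustration, and this assumes the dom0 service and policies above are in place):

#!/bin/sh
# Hypothetical helper inside sd-app: open a document in the per-source dispVM.
#   $1 = source code name, $2 = path to the decrypted document
# If the dispVM already exists, the dom0 create service fails; ignore that and
# just open the document in the already-running VM.
qrexec-client-vm dom0 "sd.CreateSourceDisp+$1" || true
qvm-open-in-vm --view-only "sd-viewer-$1" "$2"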

Implications

The dispVM that holds the documents is named after the source's code name. This means that any malicious document can become aware of that name. I'm not sure whether it can be a vector for any attack, but I thought it was worth mentioning. It can easily be eliminated by storing in sd-app a mapping SourceName <-> dispVMNumber.
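
A possible sketch of that mapping (the file location and numbering scheme are assumptions, not part of the POC): sd-app keeps a local lookup file and passes an opaque index to dom0 instead of the code name, so the VM name never reveals it.

# Hypothetical: resolve a source code name ($1) to an opaque index kept in sd-app
MAP="$HOME/.config/sd-app/dispvm-map"
mkdir -p "$(dirname "$MAP")" && touch "$MAP"
idx=$(awk -v s="$1" '$1 == s {print $2}' "$MAP")
if [ -z "$idx" ]; then
    idx=$(( $(wc -l < "$MAP") + 1 ))
    printf '%s %s\n' "$1" "$idx" >> "$MAP"
fi
# The dispVM would then be named sd-viewer-$idx rather than sd-viewer-<code name>
qrexec-client-vm dom0 "sd.CreateSourceDisp+$idx"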

TODO

  • Harden script
    • regex parameter to make sure only source name is passed
  • Security review

conorsch pushed a commit that referenced this issue Mar 4, 2021
conorsch pushed a commit to freedomofpress/securedrop-client that referenced this issue Mar 4, 2021
@conorsch
Contributor

conorsch commented Mar 4, 2021

Thanks for the great write-up, @deeplow. I took this for a spin today, and it's quite promising. Given your detailed description above, I was able to get multiple submissions from a single source to open in the same named DispVM. That certainly greatly improves the time to open a submission after the first. After I manually shut down the VM, it was automatically destroyed, given the auto_cleanup=True property. So far, so good!

My major concern with the proposed change is that the named DispVM does not automatically shut down after all submissions have been closed. I consider that a regression from the current behavior, where DispVMs are automatically shut down and destroyed as soon as a given submission is closed. In the proposed setup, the named DispVM remains running indefinitely. Perhaps we can optimize that behavior. In the forum thread you linked to, there was some discussion of shutting VMs down while idle, but I'm not sure how that would work if submission windows were left open but not interacted with.

For reference, here are the changes I implemented based on your write-up:

The first branch, in this repo, includes a new securedrop-client .deb package built from the securedrop-client branch. So you can check out 333-per-source-dispvm locally, run make clone && make dev in dom0, and take the changes for a spin.

Looking forward to discussing this more with you, @deeplow! Ideally we'd be able to get the VMs to shut down automatically once the submissions are no longer used, then I think we're in a good position to evaluate more seriously.

@deeplow
Contributor Author

deeplow commented Mar 5, 2021

Thanks @conorsch for taking a look and implementing it on a testing branch. Looking good! I'll just have to reconfigure the dev environment (as I've been out of touch with the code).

Perhaps we can optimize that behavior. In the forum thread you linked to, there was some discussion of shutting VMs down while idle, but I'm not sure how that would work if submission windows were left open but not interacted with.

Yes, but from my testing it takes quite a while. I haven't tested thoroughly or taken a look at the code yet, though.

But I think there might be a better way of approaching this: tracking the number of open windows. The assumption is that the sd-viewer VMs will only serve the purpose of visualizing information. Therefore one can make a script that periodically (via cron) checks whether the number of open windows is 0 and, if so, shuts down the VM.

Not sure if "wmctrl", "python-xlib" or another utility is already available in the template, but it is this could be pretty simple. But I don't have so much free time now nor knowledge on window managers, so feel free to take a stab at it if you're interested. Otherwise, I may be able to pick this up at my own pace.

Looking forward to discussing this more with you, @deeplow!

Likewise!

@deeplow
Contributor Author

deeplow commented Mar 8, 2021

Not sure if the shipped templates include the x11-utils package. But if they do, knowing how many windows exist could be as simple as:

# Shut down the qube once no X clients (i.e. no document windows) remain
n_open_programs=$(xlsclients | wc -l)
if [ "$n_open_programs" -eq 0 ]; then
  echo "poweroff"  # echo as a stand-in for the real poweroff while testing
fi

However, putting it in /rw/config/rc.local on the disposableVM template didn't seem to work, although it works just fine if we run it from the terminal. It probably has something to do with rc.local running outside an X session? No idea. This will require some digging, or someone who understands how this works under the hood.
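
One possible workaround (untested; the script path, cron file name, user account and display number are assumptions): run the check periodically from a cron entry with DISPLAY set explicitly, since rc.local runs outside the user's X session and xlsclients needs a display to connect to.

# /etc/cron.d/sd-viewer-idle-check (sketch)
# Every minute, run the window check as the default "user" account with
# DISPLAY pointing at the qube's X server; the script would contain the
# xlsclients check above, with an actual poweroff instead of the echo.
* * * * *  user  DISPLAY=:0 /usr/local/bin/shutdown-if-no-windows.sh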

@ninavizz
Member

For the Qubes AppMenu user testing, we'll also be testing the concept of giving users the option to open things within currently-running VMs. This will be included both in the interview/session script and in the survey we'll send out for async feedback. Can post thoughts here to tangentially inform how desirable that might be. Obviously not the same scenario, but tangential and related.

@deeplow
Contributor Author

deeplow commented Apr 15, 2021

For the Qubes AppMenu user testing, we'll be also testing the concept of giving users the option to open things within currently-running VMs.

Great idea! Looking forward to any ideas on how to convey this to users in a way that is clear; I haven't had any ideas in that direction. It only works well in this specific use-case of the SecureDrop Workstation because it is pretty much equivalent to the following two questions:

  1. Which do you prefer: having each document take 15s to open, or having, for each source, the first document take 15s and the rest open instantly?
  2. Is it a concern if malware in one of the documents is able to see and potentially manipulate other documents the same source has provided? (Note that it still won't be able to connect to the internet.)

@deeplow
Contributor Author

deeplow commented Jun 5, 2021

The implementation of QubesOS/qubes-issues#5557 will likely make the dynamic shutdown trivial to implement, since opening applications in the same disposable qube will be natively supported.

@zenmonkeykstop
Contributor

Leaving open for now, it's still relevant.
