
Build ultimate viewing station interface #415

Closed
micahflee opened this issue Jun 1, 2014 · 11 comments
@micahflee (Contributor)

I think we should build a GUI program to be installed in the Tails persistent volume that holds the journalist key. Basically, my hope is to move most of the work that journalists do from their internet-connected Tails to their air-gapped viewing station Tails. The journalist workflow can change to this:

  • On internet-connected Tails, log into SecureDrop
  • Download latest changes file, something like net-to-airgap.securedrop
  • Boot up viewing station Tails, and launch SecureDrop app
  • Import net-to-airgap.securedrop in app, which will load all of the latest changes
  • Use SecureDrop app to look at stuff, write replies (or start writing them and save drafts), star things, search things, etc.
  • Export changes to something like airgap-to-net.securedrop
  • Back on internet Tails, upload airgap-to-net.securedrop to post all replies to source

If we do this, we get some awesome benefits:

  • Everything can be automatically organized. Right now the journalist has to do a massive amount of manual organization, creating intricate file structures, etc.
  • Responses to sources get encrypted on the air-gapped computer, rather than on the connected-to-internet SecureDrop server.
  • Journalists can keep track of what they said in response too. It's hard to follow a one-sided conversation, so this will be helpful.
  • Journalists can keep detailed notes about sources, rename them to their actual names or whatever they want, etc., and it's not a problem because it's all getting saved on the air-gapped computer, never touching a computer with internet access.
  • Journalists can have a usable interface to have per-document discussions (Allow source <-> journalist communication on a document-by-document basis #391).
  • If we do it right it will be so much simpler for the journalist to use.

I've recently been playing with pywebkitgtk to build a GUI for onionshare using HTML, CSS, and JavaScript, while still being a standalone Python program. Basically, you create a GTK window, put a WebKit webview in it, and then render an HTML page. This guide was really helpful.

I think by building it as a web app that runs as a program we will have a lot of flexibility, and can make it work really well and look really polished. Optionally we can also run a web server with flask there to serve the websites and use a sqlite database (in onionshare it just serves a straight html file with no web server).

In terms of downloading changes on the internet computer to move to the air-gap viewing station and vice versa, we obviously need to only download diffs and not everything each time. There are various ways we could go about doing this, but I think we can treat the current state of a securedrop instance sort of like a git repo, where a "commit" happens each time a source is created, a doc or message is submitted, a reply is sent, or something is deleted. Each of these events happens chronologically, and they can have unique ids.
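To make the git metaphor concrete, here is a minimal sketch of such an append-only event log. The `Event` class, the set of event kinds, and the payload shapes are all hypothetical illustrations, not anything from the actual SecureDrop codebase:

```python
import itertools
import time

class Event:
    """One append-only "commit" in a hypothetical SecureDrop event log."""

    _ids = itertools.count(1)
    KINDS = {"source_created", "submission", "reply", "deletion"}

    def __init__(self, kind, payload):
        assert kind in self.KINDS
        self.id = next(Event._ids)    # unique, monotonically increasing id
        self.timestamp = time.time()  # events happen chronologically
        self.kind = kind
        self.payload = payload

# example history: a source appears, then submits a document
log = []
log.append(Event("source_created", {"source": "codename1"}))
log.append(Event("submission", {"source": "codename1", "doc": "memo.pdf"}))
```

Because ids only ever grow, "which commit is the air-gap on?" reduces to remembering a single integer.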

The securedrop server can keep track of which "commit" the journalist's air-gap is currently on. When they click the download changes button, it can bundle all "commits" from when they last downloaded until the latest commit, and serve them all together. And a similar thing can happen in the other direction, when the journalist exports from the air-gapped GUI app. If there are multiple journalists accessing the same securedrop, merges can be handled automatically. This stuff can all happen under the hood -- the only thing journalists should need to know is that they need to copy latest changes to their air-gapped computer, do some work, then copy latest changes back to their internet computer.
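The "bundle all commits since the last download" step could then be as simple as filtering the log by id. This is a sketch under the assumptions above (dict-shaped events with an `id` field), not the server's actual code:

```python
def bundle_updates(log, last_seen_id):
    """Collect every event newer than the client's last-seen "commit" id."""
    return [e for e in log if e["id"] > last_seen_id]

# hypothetical event log, ordered by id
log = [{"id": 1, "kind": "source_created"},
       {"id": 2, "kind": "submission"},
       {"id": 3, "kind": "reply"}]

bundle = bundle_updates(log, last_seen_id=1)
assert [e["id"] for e in bundle] == [2, 3]
```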

I think this will be a huge long-term project, like maybe part of the 1.0 roadmap. Thoughts?

@fpietrosanti

How does the exchange of files (.securedrop) work between the online and air-gapped Tails machines?

@micahflee (Contributor, Author)

Current securedrop users need to do this already, but I don't think we have a best practice yet.

Ideally it would use read-only media, like burning data to CDs, or using SD cards or USB sticks with hardware-enforced write-protect switches (using a different SD card or USB stick for transferring files in each direction). The lower-security method would be to just use a normal USB stick.

@diracdeltas (Contributor)

Copying-and-pasting comments from securedrop-dev:

Chris Palmer wrote:

What security goal does this serve? (I say security goal because
usability goals are better served by using a normal computer.)

@garrettr replied:

As I see them:

* Replies from journalists to sources, which may contain sensitive information, only exist in plaintext on the air-gapped viewing station. Note that this benefit is only fully realized if we encrypt end-to-end, which is a key goal of the next generation of SecureDrop.

* Metadata that could be useful to an attacker (submission times, file sizes, and the results of journalist interactions such as starring or tagging) can be moved off of the online server, or at least encrypted there, which limits the damage caused by the online server's compromise.

Finally, I think this actually does improve usability, by consolidating the journalist's interface into a single location (the air-gapped machine). Requiring an airgap obviously creates usability challenges, but we think it is essential for security. In that context, we can't get around the difficulty of requiring humans to physically round-trip data to the airgap and back, and since they already view documents on the airgapped machine, why not let them perform all of their other tasks there as well, in a unified interface?

@micahflee (Contributor, Author)

I've started building the skeleton of this air-gapped user interface here, complete with a system for "installing" it in the Persistence volume: https://github.com/micahflee/securedrop-airgap -- we can move this to freedomofpress/securedrop-airgap at some point.

This requires Tails 1.1 because it depends on wheezy packages (Tails 1.1 now has a release date of July 22). For now you can use the 1.1 beta to test this out and develop in.

I've also been thinking about journalist workflow and how to make it simpler. As it stands, journalists already need to "install" software in their networked Tails Persistence volume in order to add the HidServAuth line to torrc on boot. So why not make this software do everything we need, rather than just that?

Once there's an air-gapped Tails desktop app that you import data into, all you have to do on the networked Tails is download (and upload) the latest changes. Then you can copy them to (or from) your air-gapped Tails and import them. So how about this idea:

There should also be another piece of software called securedrop-client, which runs on the networked Tails computer. Like securedrop-airgap, it's a GUI app. On first run it lets you set your .onion domain, your HidServAuth token, and your username (for basic auth and Google Authenticator). It can save this info in a config file (like ~/Persistent/.securedrop-client.conf). You open it and can download the latest changes. Each time you do this, it records the timestamp in .securedrop-client.conf, so that the next time you run it, it will be able to download the latest changes since the last run. You should also be able to choose a date from a calendar and download changes from that point on, or just from the beginning of time. And you should be able to browse for a file to upload updates.
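The config-plus-timestamp bookkeeping described above could look something like this. The JSON format, field names, and function names here are my own assumptions for illustration, not a spec:

```python
import json
import time

def load_conf(path):
    """Read the client config, falling back to first-run defaults."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {"onion": None, "hidservauth": None, "last_download": 0}

def record_download(path, conf):
    """Remember when changes were last pulled, so the next run resumes from there."""
    conf["last_download"] = int(time.time())
    with open(path, "w") as f:
        json.dump(conf, f)
```

In practice the path would be something like ~/Persistent/.securedrop-client.conf so it survives Tails reboots.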

Basically, I don't see any reason to require the journalist to use a web browser, when we can build a simpler, better interface for downloading changes.

In order for this to work, the journalist.py web app needs a new endpoint, something like /download_updates?start_timestamp=xxx. This will return a file that contains all of the updates from start_timestamp until now. (And it will also need /upload_updates.)

We also need a new library that will be included in both the securedrop project and the securedrop-airgap project that handles creating these updates and saving them to a file, and also loading from a file and parsing back into a data structure. Which means we'll need to come up with a file format, and maybe a file format version (in case it changes in the future). To use the git metaphor above, if we want to call a "commit" an "update", then maybe the python module will be called securedrop-updates, and can be packaged as a debian package and included in apt.pressfreedomfoundation.org. This way it can be a dependency of both securedrop and securedrop-airgap.
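A versioned serialize/parse pair of the kind a shared securedrop-updates library would need might look like this sketch. The choice of JSON, the field names, and the function names are assumptions, not the actual file format:

```python
import json

FORMAT_VERSION = 1  # bump if the update file format ever changes

def dump_updates(updates):
    """Serialize a batch of updates into a versioned file payload."""
    return json.dumps({"version": FORMAT_VERSION, "updates": updates})

def load_updates(blob):
    """Parse a payload back into a list of updates, rejecting unknown versions."""
    data = json.loads(blob)
    if data["version"] != FORMAT_VERSION:
        raise ValueError("unsupported format version: %d" % data["version"])
    return data["updates"]

blob = dump_updates([{"id": 1, "kind": "reply", "source": "codename1"}])
assert load_updates(blob) == [{"id": 1, "kind": "reply", "source": "codename1"}]
```

Embedding the version up front is what lets both securedrop and securedrop-airgap evolve the format without silently misreading old files.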

So to wrap it all up:

  • The journalist interface in securedrop gets a new endpoint, /download_updates, and a new dependency, securedrop-updates. When a journalist loads /download_updates, it uses securedrop-updates to make a new update package, adds one update at a time, and then serializes it into a file and sends it back to the journalist.
  • There will be a new GUI app to be installed in the networked Tails persistent folder called securedrop-client. The journalist configures this to connect to their auth hidden service, and it just lets them download recent updates by loading /download_updates and passing in a timestamp.
  • There will be a new GUI app to be installed in the air-gapped Tails persistent folder called securedrop-airgap. The journalist opens this to see an overview of everything happening in securedrop. When they import updates it refreshes the UI with all of the new updates. When they export updates it exports everything from the last import onward (so new replies to sources, when things get deleted, etc.).
  • Back on the networked Tails, in securedrop-client, when the journalist uploads the changes files, it will make a POST request to /upload_updates in the securedrop journalist web app. When the file is uploaded, the server will use securedrop-updates to load it, parse it into data structures, and then loop through each update doing its action (post reply, delete submission, etc.).
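The server-side "loop through each update doing its action" step in the last bullet is essentially a dispatch table. A minimal sketch, with hypothetical update kinds and handlers:

```python
def apply_updates(updates, handlers):
    """Apply each uploaded update by dispatching on its kind."""
    for u in updates:
        handlers[u["kind"]](u)

# toy handlers that just record what they were asked to do
applied = []
handlers = {
    "reply":  lambda u: applied.append(("reply", u["source"])),
    "delete": lambda u: applied.append(("delete", u["target"])),
}

apply_updates([{"kind": "reply", "source": "s1"},
               {"kind": "delete", "target": "doc1"}], handlers)
```

Real handlers would post the GPG-encrypted reply or securely delete the submission instead of appending to a list.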

@micahflee (Contributor, Author)

Also, this is all stuff I'm just dreaming up. As it stands securedrop is really easy to use for sources and still painful for journalists. I think 0.3 will help this a lot, but if we can make securedrop-airgap and securedrop-client it will be "easy" for journalists too, at least compared to what they currently have to do.

Please add your own input. This might not be the best approach, but I think it's pretty good.

@dolanjs (Contributor) commented Jun 17, 2014

Micah, with this approach is there a reason to leave a copy of the existing submissions on the app server? I think if the networked journalist client downloads and securely deletes the existing submissions, it would reduce the risk if the app server is compromised, leaving only new/future submissions (since the last time a download happened), the source's GPG key creation date, and any replies that have not been deleted by the source yet.

From a journalist usability perspective I think it makes a lot of sense to keep track of the submissions and replies in one interface. This approach would also solve organizing and retaining offline backups of relevant submissions, which is currently a manual and messy process once it gets past a few sources and/or multiple separate submissions by the same source.

It might also be a good idea for the air-gapped journalist interface to provide a way for the journalist to write encrypted notes (that are not replies and won't be transferred off the air-gapped host) that could be organized inline with the submissions and replies.

@micahflee (Contributor, Author)

Micah with this approach is there a reason to leave a copy of the existing submissions on the app server?

The reasons I can think of: what about orgs with multiple journalists, in different geographic locations, who all check securedrop? If JournoA downloads and deletes all of the updates from the server, then when JournoB logs in there won't be any updates to download.

There also could be the case where it's normally JournoA's job to check securedrop, but for whatever reason they can't do it for a month so JournoB takes over until JournoA is back.

And then finally there's the scary bit: what if a journalist downloads the updates and then doesn't successfully copy them to the air-gap Tails? Like what if they burn the wrong file to CD and then shut down Tails? By the time they boot up Tails again, the file is lost and can't be redownloaded. Of course this can be mitigated by making securedrop-client always download files to somewhere like ~/Persistent/securedrop_downloads. Or what if something weird happens where the downloaded file is corrupted, but the server thinks the download completed and goes ahead and deletes things?

Also might be a good idea in the airgapped journalist interface to also provide a way for the journalist to write encrypted notes (that are not replies and wont be transfered off the airgapped host) that could be organized inline with the submissions and replies.

I was thinking securedrop-airgap could be as large and featureful as we want, because we can add as much new information as we want (starring sources, giving sources custom aliases, adding notes to everything -- documents, messages, replies, etc.). When you export changes, it will only export updates that are actionable by the server (posting replies, deleting things), and the rest of the data won't get exported. This is safe because it's all on an air-gapped computer, and since it's in Tails it's all sitting in an encrypted persistence volume.

But this also has the same problem -- if multiple journalists are using this, each journalist will only see their own extra notes. This part might be fine though -- maybe JournoB doesn't need notes from JournoA's source. But perhaps there can be a way of exporting from securedrop-airgap that also exports all this other stuff, so that file can be directly imported into another securedrop-airgap to keep them all in sync. (And JournoA and JournoB can use onionshare to send those updates to each other, so they don't need to exist on the securedrop server at all.)

But ok tying it all together, how about this for a solution:

When a journalist uses securedrop-client to download updates, it marks all of those updates on the server for deletion a week from now (sort of like the option in POP to delete emails later). This way the journalist (or another journalist) can still download them again before they get auto-deleted.

When exporting data from securedrop-airgap, there can be a normal export and a special checkbox to export all data (sort of like exporting a PGP keypair in Enigmail -- the default is just to export the public key, but you can check a box to also export the secret key). The normal export will only export replies to sources (and perhaps there can be a "delete immediately" option on imported updates that gets exported too). If you check the box, it exports all data, including submissions, replies, notes, labels, etc. This can be used to import into another securedrop-airgap, not into securedrop-client. (If you try importing it into securedrop-client, it could silently ignore everything except the replies and only send those to the server.)

This way, data doesn't sit on the securedrop server for more than a week after getting downloaded (and can optionally get deleted right away), and it's possible for journalists to send each other blocks of updates to keep themselves in sync.
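The POP-style delayed delete could be sketched like this. The one-week window comes from the proposal above; the field name `delete_at` and the function names are hypothetical:

```python
RETENTION = 7 * 24 * 3600  # one week, matching the POP-style delayed delete

def mark_downloaded(updates, now):
    """Schedule just-downloaded updates for deletion a week from now."""
    for u in updates:
        u["delete_at"] = now + RETENTION

def purge(updates, now):
    """Keep only updates whose grace period has not yet expired."""
    return [u for u in updates if u.get("delete_at", float("inf")) > now]

ups = [{"id": 1, "kind": "submission"}]
mark_downloaded(ups, now=0)
assert purge(ups, now=RETENTION - 1) == ups  # still downloadable within the week
```

A periodic server-side job would run `purge`, so a second journalist (or a botched CD burn) still has a week to re-fetch.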

@micahflee (Contributor, Author)

By the way the securedrop-airgap skeleton for a GUI app that's secretly a website is pretty much done:

[Screenshot: the securedrop-airgap GUI skeleton, 2014-06-17]

Here's the web page that it's rendering: https://github.com/micahflee/securedrop-airgap/blob/master/securedrop_airgap/templates/index.html

And here's the server-side flask web app that's powering it: https://github.com/micahflee/securedrop-airgap/blob/master/securedrop_airgap/webapp.py

@Hainish (Contributor) commented Jun 18, 2014

I generally think it's a great idea! I think it would be good to give journalists a way to write replies in a more secure environment and create tags and make notes in a way that never escapes the airgap, unless exported explicitly. I envision this basically making the hidden-service hosted web ui redundant, used only to sync between instances of the securedrop-client apps with the endpoints you've listed.

You were mentioning the problem of syncing private notes between airgaps. Since the airgapped machines already have the journalist keypair to decrypt messages, we can use this to sign exports from the airgap (in the case of public exports) and encrypt messages to the other airgap containing the same keypair (in the case of private exports). When you move the exports over to the second airgap, both can be verified and the private export can be decrypted. As we move to a system with multiple recipients, this workflow could be amended to sign with journalist a's private key to journalist b, having their public key available. This would even allow collaboration between journalists, not just sharing notes on a source but possibly even selectively sharing documents and allowing collaboration.

I think we'll have to deal with some problems regarding the canonical order of events, though. What if there are two submissions to the /upload_updates endpoint: one which deletes a document from stagepoint a, and another which marks a document as important from stagepoint a. Both are valid individually, but the server has to take the first one it processes and reject or try to merge the second. I could see this easily happening if there are multiple journalists working on the same document pool, both working offline and uploading asynchronously. How do we deal with this merge conflict?
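One simple policy for the conflict described above is "first valid update wins, later conflicting updates are rejected." A sketch with hypothetical state and action names:

```python
def merge(state, update):
    """Apply an update unless it conflicts with already-applied state.

    Returns True if applied, False if rejected (first-writer-wins policy).
    """
    target = update["target"]
    if target in state["deleted"]:
        return False  # document already deleted: nothing left to act on
    if update["action"] == "delete":
        state["deleted"].add(target)
    elif update["action"] == "mark_important":
        state["important"].add(target)
    return True

state = {"deleted": set(), "important": set()}
assert merge(state, {"action": "delete", "target": "doc1"}) is True
# the second journalist's async upload now loses the race:
assert merge(state, {"action": "mark_important", "target": "doc1"}) is False
```

Rejected updates would need to be reported back to the losing journalist rather than silently dropped, which is the part that actually needs careful design.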

This problem only becomes worse when there's no server to make the final call, as is the case with the private exports. Which sequence of events should be staged? And what if the installed version of securedrop-airgap is different on different airgaps, and the way events are imported changes? What if the airgaps get out of sync because of this? I think we'd need to include with each update some kind of checksum of all data in the current state.
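The "checksum of all data in the current state" idea could be as simple as hashing a canonical serialization, so two airgaps can cheaply check they agree before exchanging private exports. A sketch (the canonicalization scheme is an assumption):

```python
import hashlib
import json

def state_checksum(state):
    """Hash a canonical (key-sorted) serialization of the full state."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# key order must not matter, or identical states would look divergent
a = state_checksum({"sources": ["s1"], "docs": []})
b = state_checksum({"docs": [], "sources": ["s1"]})
assert a == b
```

Shipping this digest with each update batch lets an importer detect that the two sides diverged (e.g. due to differing securedrop-airgap versions) before applying anything.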

I don't think these problems are show stoppers but I do think they have to be considered carefully before implementation. All in all though I think this is exciting and we should incorporate it on our roadmap!

@micahflee (Contributor, Author)

I envision this basically making the hidden-service hosted web ui redundant, used only to sync between instances of the securedrop-client apps with the endpoints you've listed.

I agree. I think it makes sense to develop this in parallel to the current journalist interface. But once this contains all the functionality of the current journalist interface, we should completely strip out all of that code and just use this instead.

I also think you have some really good ideas about signatures. I think all exports should be encrypted and signed.

The securedrop web app can have its own keypair. When you download updates, the file you download should be encrypted to the air-gapped key and signed with the web app key. When you export from securedrop-airgap to import into securedrop-client, the file exported should be encrypted to the web app key and signed with the air-gapped key. When you export from securedrop-airgap to import into another securedrop-airgap, it should be both encrypted to and signed with the air-gapped key. Software can refuse to import updates that aren't properly signed.
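The per-direction key rules above fit in a small table, which keeps the policy auditable in one place. The key names and direction labels here are placeholders, not real identifiers:

```python
# (source, destination) -> (encrypt_to, sign_with), per the scheme above
KEY_POLICY = {
    ("server", "airgap"): ("airgap_key", "webapp_key"),
    ("airgap", "server"): ("webapp_key", "airgap_key"),
    ("airgap", "airgap"): ("airgap_key", "airgap_key"),
}

def keys_for(src, dst):
    """Look up which key encrypts and which key signs a transfer."""
    return KEY_POLICY[(src, dst)]

assert keys_for("server", "airgap") == ("airgap_key", "webapp_key")
```

Import code would then refuse any file whose signature doesn't match the `sign_with` key expected for that direction.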

This is especially useful because if removable media with updates on it -- a burned CD, a USB stick, or an SD card -- gets compromised, it's totally worthless unless the air-gapped key or the web app key is also compromised (otherwise there would be metadata leakage).

Additionally, I think it's actually safe to use TOFU to import the web app key into securedrop-airgap. Each time you download updates using securedrop-client, the web app can include a copy of its public key. If securedrop-airgap doesn't know what the web app key is, it can just store the first web app key that gets imported. From that point on it can refuse to import updates that are signed with a different key. The installation documentation can just say that after installing all of the securedrop components, download and import updates the first time to initialize securedrop-airgap.
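Trust-on-first-use pinning of the web app key reduces to a few lines of state. A sketch, with a hypothetical class name and fingerprints standing in for real GPG key fingerprints:

```python
class TofuStore:
    """Trust-on-first-use pinning of the web app's signing key fingerprint."""

    def __init__(self):
        self.pinned = None  # no key known until the first import

    def check(self, fingerprint):
        if self.pinned is None:
            self.pinned = fingerprint      # first import: pin this key
            return True
        return fingerprint == self.pinned  # afterwards: must match the pin

store = TofuStore()
assert store.check("AAAA1111") is True   # first import initializes the pin
assert store.check("BBBB2222") is False  # a different key is refused
```

The pin itself would live in the encrypted persistence volume, alongside the rest of the securedrop-airgap state.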

@redshiftzero (Contributor)

this is now being actively worked on in the qt client repo: https://github.com/freedomofpress/securedrop-client
