
Proxy and serve up apt updates for all securedrop instances #2106

Closed

msheiny opened this issue Aug 14, 2017 · 10 comments

Comments

@msheiny
Contributor

msheiny commented Aug 14, 2017

Feature request

Description

Having SD instances go out to upstream on their own for apt updates is problematic, because it sometimes introduces critical, breaking package updates to the entire ecosystem at once.* Instead, we should host and proxy updates to SecureDrop instances ourselves, performing QA whenever we see new packages hit upstream.

* An example of this happening this week is #2105
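For context, the auto-update behavior in question is driven by unattended-upgrades. A minimal sketch of the relevant origins list follows; the exact contents of SecureDrop's shipped config are an assumption here, not something this issue specifies:

```
// /etc/apt/apt.conf.d/50unattended-upgrades (sketch; assumed contents)
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
    // Packages from the Tor Project repo are also auto-applied today,
    // which is how a breaking tor build can reach every instance at once.
    "TorProject:${distro_codename}";
};
```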

User Stories

As a SecureDrop maintainer, I want to ensure that upstream package updates do not break the SecureDrop system.

@conorsch
Contributor

One year ago, a breaking change in the trusty build for tor also took down SecureDrop instances in one fell swoop: #1364. We definitely need to mirror the Tor Project apt repo and have the servers install from the FPF-controlled mirror, so we can prevent breakage like this going forward.
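A sketch of what that swap could look like on an instance; the mirror hostname is illustrative (a testing mirror with this name shows up later in the thread):

```
# /etc/apt/sources.list.d/tor.list (sketch)
# Before: instances track the Tor Project repo directly
# deb https://deb.torproject.org/torproject.org trusty main
# After: instances track an FPF-controlled mirror instead
deb http://tor-apt.ops.freedom.press trusty main
```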

@ghost

ghost commented Aug 14, 2017

It seems reasonable, but I have an uneasy feeling about this: we would not want to trade one source of general failure for another. Could we instead find a way to ensure that not everything goes down at the same time? For instance, unattended upgrades would not happen right away; they would be delayed 24h, except on one instance monitored by FPF. If that instance goes down for some reason, a file could be created somewhere like http://securedrop.org/NO_ATTENDED_UPGRADES, and unattended upgrades would be skipped as long as this URL returns 200.

I don't like this specific proposal, but maybe there is something sysadmins tend to do to avoid breaking all machines at once.

My 2cts ;-)
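A minimal sketch of the kill-switch ghost describes, as a wrapper around the upgrade run; the sentinel URL is ghost's example, and everything else here is assumed:

```sh
#!/bin/sh
# Sketch: skip the scheduled upgrade run whenever the sentinel URL
# exists (returns HTTP 200), per ghost's proposal above.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://securedrop.org/NO_ATTENDED_UPGRADES)
if [ "$STATUS" = "200" ]; then
    echo "upgrade kill-switch active; skipping unattended upgrades" >&2
    exit 0
fi
exec unattended-upgrade
```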

@r4v5

r4v5 commented Aug 15, 2017

We would not want to trade one source of general failure for another.

In the current status quo, a bad AppArmor profile takes down everything. In an FPF repo-mirror world, a bad AppArmor profile takes down the FPF staging and development envs instead, at the cost of more manual labor doing package promotion and more bandwidth hosting the entire repository, and the failure mode if the repo experiences downtime is production SD boxes not updating packages for a few days but still operating.
I'd love to see a repo mirror system and a staggered rollout to production, but also recognize that, at the speed the base OS moves, that is a sizeable amount of work to volunteer someone else for.
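One common low-tech way to get the staggered rollout r4v5 mentions, sketched here as an assumption rather than a concrete plan, is to derive a stable per-host delay before the upgrade run:

```sh
#!/bin/sh
# Sketch: spread the fleet's upgrade runs across a 24h window by
# deriving a deterministic per-host delay from the hostname.
DELAY=$(( $(hostname | cksum | cut -d' ' -f1) % 86400 ))
sleep "$DELAY"
exec unattended-upgrade
```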

@msheiny
Contributor Author

msheiny commented Aug 15, 2017

the failure mode if the repo experiences downtime is production SD boxes not updating packages for a few days but still operating.

Yeah, this is exactly my reasoning: the failure mode of us self-hosting, where the apt repo might go down, is a lot less catastrophic than the current model, where a breaking deb package gets pushed straight out. I'm only planning to mirror the tor repo right now -- and I've got some tricks up my sleeve for hosting ;)

@ghost

ghost commented Aug 15, 2017

I acknowledge that a proxy hosted by FPF, with a verification step before any package is pushed to the FPF repository, would have saved us from yesterday's downtime.

We would not want to trade one source of general failure for another.

To be more precise, it is entirely possible for packages pulled from the FPF repository to break a SecureDrop installation even after being verified not to break the staging environment. Bugs happen ;-)

I think it would be wise to devise a way to avoid breaking all installs at the same time, somehow.

@redshiftzero redshiftzero modified the milestones: 0.4.2, 0.4.3 Aug 15, 2017
msheiny added a commit that referenced this issue Aug 15, 2017
This addresses issue #2106 - mitigate side-effects from a tor package
breaking securedrop instances in the field.

The actual logic that creates our mirror is in source control in another repo.
msheiny mentioned this issue Aug 15, 2017
msheiny added a commit that referenced this issue Aug 16, 2017
@ageis
Contributor

ageis commented Aug 18, 2017

Seems reasonable. Unattended-upgrades is not even that popular among the other ops shops I've had a chance to look at or hear about in the last 2 yrs, due to exactly the kind of breakage you recently experienced. Say you are running a big popular website out of a large rack colocation: in many cases software upgrades are only applied explicitly, when a release is made or a state is applied, and specific versions of software are pinned (either via a custom repo or to versions that have had a chance to be tested). E.g., in Ansible the pattern is usually absent/present/latest (though you can also pin by putting an equals sign and a version after the package name), and in Salt there is an explicit aptpkg.version.
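That pinning pattern, sketched with plain apt rather than a config-management tool; the version string here is hypothetical:

```sh
# Sketch: hold tor at a known-good, QA'd version via apt pinning.
cat > /etc/apt/preferences.d/tor.pref <<'EOF'
Package: tor
Pin: version 0.2.9.*
Pin-Priority: 1001
EOF
apt-get update && apt-get install tor   # resolver now sticks to the pin
```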

It makes sense to only push what's tested. Like, what was the priority level of the Tor update? It obviously didn't come through security.ubuntu.com, amirite?

Either way, it is common for one node in the rack to serve as the apt server for the rest of the fleet, via apt-cacher or apt-cacher-ng, so go for it. Of course, I'm sure you'll consider that one host a high-value target, so you know what to do. In favor too.
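For reference, the apt-cacher-ng setup ageis mentions is roughly this; the cache hostname is hypothetical:

```sh
# On the designated cache node:
apt-get install apt-cacher-ng            # serves on port 3142 by default
# On every other node in the fleet:
echo 'Acquire::http::Proxy "http://apt-cache.internal:3142";' \
    > /etc/apt/apt.conf.d/01proxy
```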

@icco
Contributor

icco commented Aug 23, 2017

I came across this tool this morning and thought of this issue: https://github.com/github/aptly/blob/master/README.rst

@conorsch
Contributor

@icco Thanks for mentioning aptly, that's actually what we use to host the apt.freedom.press repository at present! For the mirror, we're considering an S3 bucket approach for minimal maintenance burden, reusing the same pubkey as the official Tor apt repository so we don't need to re-sign releases.
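If the aptly route were taken for the mirror as well, the workflow would look roughly like this; the mirror and snapshot names are illustrative, and note that publishing implies re-signing with a local key, which comes up again below:

```sh
# Rough aptly mirror workflow (sketch)
aptly mirror create tor-trusty https://deb.torproject.org/torproject.org trusty main
aptly mirror update tor-trusty                # fetch packages from upstream
aptly snapshot create tor-2017-08 from mirror tor-trusty
aptly publish snapshot tor-2017-08            # publish a frozen, QA-able set
                                              # (signed with a local GPG key)
```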

@conorsch
Contributor

We have a testing mirror up at http://tor-apt.ops.freedom.press but it's currently failing during install:

```
E: Release file for http://tor-apt.ops.freedom.press/dists/trusty/InRelease is expired (invalid since 4d 6h 40min 27s). Updates for this repository will not be applied.
```

Indeed, it's expired:

```
$ curl -s http://tor-apt.ops.freedom.press/dists/trusty/InRelease | grep Valid
Valid-Until: Fri, 22 Sep 2017 17:21:56 UTC
```

We'll need to tweak the hosting parameters to make sure we don't inadvertently break updates for running instances.
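For diagnosis only (not a production fix), apt can be told to ignore the expired stamp for a single run:

```sh
# Diagnostic only: bypass the Valid-Until check for this one update run.
apt-get -o Acquire::Check-Valid-Until=false update
```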

conorsch pushed 6 commits that referenced this issue Sep 28, 2017
@msheiny
Contributor Author

msheiny commented Sep 28, 2017

Valid-Until: Fri, 22 Sep 2017 17:21:56 UTC

Awww man, that is lame :(

To echo what was discussed in the SD meeting today: this is going to be a huge problem, and we might want to consider using aptly here instead. The difference is that aptly will require us to use SecureDrop's signing key (which is a pain in the butt), BUT it gets us around the expiration problem noted above. I'm not sure this ticket is the best place to decide which approach makes more sense (since it's more of a backend decision), and any decision we make will only affect this PR in a minor way (just slight URL changes).
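The aptly approach works around the expiry because each publish regenerates and re-signs the Release file, so clients never see a stale upstream Valid-Until stamp. A sketch; the distribution matches the thread, but the key ID is hypothetical:

```sh
# Sketch: refresh the published repo, re-signing with our own key.
aptly publish update -gpg-key="0xDEADBEEF" trusty
```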

conorsch pushed a commit that referenced this issue Oct 3, 2017
conorsch pushed a commit that referenced this issue Oct 13, 2017
conorsch pushed a commit that referenced this issue Oct 14, 2017
conorsch pushed a commit that referenced this issue Oct 15, 2017
conorsch pushed a commit that referenced this issue Nov 27, 2017
@redshiftzero redshiftzero modified the milestones: 0.5, 0.5.1 Nov 29, 2017
@ghost ghost added the goals: packaging label Dec 4, 2017
conorsch pushed 3 commits that referenced this issue Dec 22, 2017
msheiny added a commit that referenced this issue Dec 22, 2017