
Uploads dom0 RPM package for securedrop-workstation template #251

Merged
conorsch merged 4 commits into master from upload-securedrop-workstation-template-rpm
May 15, 2019

Conversation

conorsch (Contributor) commented:
Provides scripts for uploading the RPM generated by make template to an S3-backed RPM repo. Uses a local container to generate the repo metadata, then uploads to S3 (assumes valid AWS credentials).

Note that this implementation clobbers existing remote state: whatever local repo is generated will be pushed to the remote with no regard for state maintenance, meaning prior versions of packages will no longer be available. That's fine for our near-term testing needs; just be aware that changes you push to the remote are not currently version controlled, and are destructive. Stateful handling can be added as part of #157, once we resolve #250.

Testing

Does the RPM repo work?

  1. Clone this branch to dom0, then run make all.
  2. Confirm that you can see the package: sudo qubes-dom0-update --action=search qubes-template-securedrop-workstation
  3. Install the package: sudo qubes-dom0-update qubes-template-securedrop-workstation
  4. Create a new AppVM based on the securedrop-workstation [sic; the Salt-provisioned template is still called sd-workstation] template. Confirm you can log into the VM and update it.
  5. Clean up the AppVM: qvm-remove <vm_name>
  6. Uninstall the package: sudo dnf remove qubes-template-securedrop-workstation
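The dom0 verification steps above can be sketched as a small helper script. This is only an illustrative sketch, not part of the PR; the AppVM name sd-test and the qvm-create invocation are assumptions for the example.

```shell
#!/bin/bash
# Sketch of the dom0 test-plan steps above (illustrative, not part of this PR).
# Assumes Qubes dom0 tooling is available; "sd-test" is a placeholder VM name.
set -euo pipefail

check_template_rpm() {
  # Confirm the package is visible in the configured repo.
  sudo qubes-dom0-update --action=search qubes-template-securedrop-workstation
  # Install the template package.
  sudo qubes-dom0-update qubes-template-securedrop-workstation
  # Create a throwaway AppVM based on the new template, then clean up.
  qvm-create --class AppVM --template securedrop-workstation --label red sd-test
  qvm-remove sd-test
  # Uninstall the template package.
  sudo dnf remove qubes-template-securedrop-workstation
}
```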

Can you upload packages to the repo?

  1. Confirm you have valid AWS credentials. (@msheiny is a great resource for this.)
  2. Follow the README docs to create an sd-template-builder AppVM, based on fedora-29, and run make template inside that VM to generate an RPM.
  3. Follow the README docs to sign that RPM with the test key.
  4. Copy that signed RPM into rpm-repo/ in your dev machine (e.g. sd-dev).
  5. Run make publish-rpm.

The script will build a local container, verify the signatures on the RPM, generate the RPM repo metadata, then upload both the signed RPM and the repo metadata to a remote S3 bucket.
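That pipeline can be sketched roughly as follows. The `rpm -Kv`, `container_run createrepo_c`, and `aws --profile sdpackager s3 sync` invocations come from this PR's diff; the bucket URL and the repo directory default are placeholders, not the PR's actual values.

```shell
#!/bin/bash
# Rough sketch of the publish flow described above.
# s3://example-bucket/... is a placeholder; see the real script for the target.
set -euo pipefail

publish_rpm() {
  local repo_dir="${1:-rpm-repo}"
  # Verify the GPG signature on each RPM before publishing.
  local rpm
  for rpm in "${repo_dir}"/*.rpm; do
    rpm -Kv "${rpm}"
  done
  # Generate the repodata/ metadata inside the local container
  # (container_run is the wrapper defined in this PR's script).
  container_run createrepo_c "${repo_dir}"
  # Push the repo dirtree to S3; sync overwrites the remote state.
  aws --profile sdpackager s3 sync "${repo_dir}" s3://example-bucket/dom0-rpm-repo/
}
```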

Conor Schaefer added 3 commits April 15, 2019 15:34
Adds gitignore directives to avoid committing RPM packages.
Builds a local container for running `createrepo_c`, which generates
the RPM repository structure. Also runs `rpm -Kv <rpm>` to validate
signatures on the RPM packages.

Excludes RPM packages when running `make clone` in dom0, because we
don't want to wait for the tar action on a large (~700MB) file every
time.

Pulls in external dependencies via pipenv, specifically for the `awscli`
lib, which gives us access to commands like `aws s3 sync`, which is what
we need to upload the repo contents.
Configures the RPM repository in dom0, which means also configuring it
in sys-firewall (the default UpdateVM for dom0). Both the pubkey
and the repo config itself are populated dynamically in sys-firewall.
These changes do not persist over reboots, which is a problem.
Provides developer-oriented documentation for uploading the signed RPM
packages. The nitty-gritty details of obtaining valid AWS credentials
are glossed over, since the workflow is "talk to ops team," but
otherwise, the process is rather on-the-rails:

  1. Build the RPM (already documented)
  2. Sign the RPM (already documented)
  3. Upload the signed RPM (these docs are new)

We'll likely revise these workflows in the near future, but for now,
we'll continue to work from the docs to build familiarity with the
fundamental actions.
@conorsch conorsch requested review from kushaldas, emkll and rmol April 15, 2019 22:50
@conorsch conorsch changed the title Uploads dom0 RPM package for securedrop-workstationt template Uploads dom0 RPM package for securedrop-workstation template Apr 15, 2019
-----END PGP PUBLIC KEY BLOCK-----
conorsch (Author) commented:
Adding the trailing newline was required, otherwise dnf balked, saying the pubkey was invalid.

gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test
enabled=1
baseurl=https://dev-bin.ops.securedrop.org/dom0-rpm-repo/
Contributor commented:
Hey @conorsch, you probably want an override variable here; we don't want to target dev-bin.ops.securedrop.org in all scenarios, do we?

conorsch (Author) replied:

Right now, we only need dev support, but you're right that we'll eventually want to flip this to prod, while preserving the ability for developers to override. Any thoughts on how to do that cleanly in Salt? We're not currently using any vars-based configuration, as we do heavily over in SecureDrop core (https://github.com/freedomofpress/securedrop/).

Contributor replied:

Yeah @conorsch, look into Salt pillars here. Variable interpolation uses Jinja syntax, and you can inject complex Jinja logic wherever you like (one of the pros/cons of using Salt vs. Ansible).
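A pillar-driven baseurl might look roughly like this. The state ID, file path, repo section name, and the `rpm_repo_baseurl` pillar key are all hypothetical, not from this PR:

```sls
# Hypothetical pillar data (e.g. in pillar/repo.sls):
#   rpm_repo_baseurl: https://dev-bin.ops.securedrop.org/dom0-rpm-repo/
dom0-rpm-repo-config:
  file.managed:
    - name: /etc/yum.repos.d/securedrop-workstation.repo
    - contents: |
        [securedrop-workstation-dom0]
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test
        enabled=1
        baseurl={{ salt['pillar.get']('rpm_repo_baseurl', 'https://dev-bin.ops.securedrop.org/dom0-rpm-repo/') }}
```

Developers could then override the default target by setting the pillar key locally.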

container_run createrepo_c .

# Push created repo dirtree to S3
aws --profile sdpackager s3 sync \
Contributor commented:

How do you feel about making explicit STS calls (assumerole) here?

conorsch (Author) replied:

@msheiny I like that idea—can you recommend CLI additions? Also, @rmol, given that you just ran through the full test plan, can you comment on how troublesome it was to sort out the AWS access, with the script written as-is?
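An explicit STS assume-role step ahead of the sync might look roughly like this sketch; the role ARN, session name, and function name are placeholders, not real infrastructure.

```shell
#!/bin/bash
# Hedged sketch of an explicit STS assume-role step (placeholders throughout).
set -euo pipefail

assume_packager_role() {
  local creds
  # Request temporary credentials for the packaging role
  # (arn:aws:iam::123456789012:role/sdpackager is a placeholder ARN).
  creds="$(aws sts assume-role \
      --role-arn arn:aws:iam::123456789012:role/sdpackager \
      --role-session-name rpm-publish \
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
      --output text)"
  # Export the temporary credentials for subsequent aws calls.
  read -r AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN <<< "${creds}"
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}
```

This would avoid hardcoding a named profile, at the cost of requiring every uploader to know the role ARN.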

Contributor commented:

The hardcoded sdpackager profile meant I had to figure out how to construct my config/credentials files to suit the script, which did mean some more manual fumbling.

conorsch (Author) replied:

Great feedback; let's aim to clean this up the next time another team member runs through the process. I'm not opposed to adding a CLI flag to the existing script, but we may want to migrate to a more robust Python script in the near future.


rmol commented May 14, 2019

I've made it through the test plan, and was able to upload the RPM.

Does the RPM repo work?

  • Clone this branch to dom0, then run make all.
  • Confirm that you can see the package: sudo qubes-dom0-update --action=search qubes-template-securedrop-workstation
  • Install the package: sudo qubes-dom0-update qubes-template-securedrop-workstation
  • Create a new AppVM based on the securedrop-workstation [sic; the Salt-provisioned template is still called sd-workstation] template. Confirm you can log into the VM and update it.
  • Clean up the AppVM: qvm-remove <vm_name>
  • Uninstall the package: sudo dnf remove qubes-template-securedrop-workstation

Can you upload packages to the repo?

  • Confirm you have valid AWS credentials. (@msheiny is a great resource for this.)
  • Follow the README docs to create an sd-template-builder AppVM, based on fedora-29, and run make template inside that VM to generate an RPM.
  • Follow the README docs to sign that RPM with the test key.
  • Copy that signed RPM into rpm-repo/ in your dev machine (e.g. sd-dev).
  • Run make publish-rpm.

rmol (Contributor) left a review:

Worked for me. I've tried to clarify a few instructions in the README that I tripped over. Please review to make sure that they actually clarify. 😉

it might be worth creating another template derived from
`fedora-29`, into which you can install those extras, and basing
the builder VM on that, or just using a StandaloneVM to save time
and repetition.
conorsch (Author) replied:

Personally, I don't bother installing Docker inside the sd-template-builder VM; I simply qvm-move <rpm> the artifact back to my sd-dev environment for upload. Either way, your docs are certainly clearer than what we've had. Let's continue to discuss the optimal workflows here and improve the docs as we go.

conorsch (Author) commented:

Thanks for the detailed review, @rmol! Great docs improvements. We'll likely be iterating on both the docs and the upload functionality in the coming weeks as we use the workflow more.

@conorsch conorsch merged commit 300816f into master May 15, 2019
@conorsch conorsch deleted the upload-securedrop-workstation-template-rpm branch May 15, 2019 17:36
@conorsch conorsch mentioned this pull request May 15, 2019
Successfully merging this pull request may close these issues.

Template builder generating non-working TemplateVMs