Use rpm in dom0 for make all #406

Closed · 8 tasks done
redshiftzero opened this issue Jan 13, 2020 · 11 comments
Comments

@redshiftzero (Contributor) commented Jan 13, 2020

We have an RPM package that contains the packaged dom0 configuration, but we're not using it. We need three environments:

  • prod: RPM signed with the release key (used for provisioning pilot machines; should be the default)
  • staging: RPM signed with the test key; used for QA/testing, or for development on subcomponents of the SecureDrop Workstation where the dom0 configuration does not need modification
  • dev: does not use the RPM (behavior of the current make all)

Proposal (a rough Makefile sketch follows this list):

  • make all -> will become prod (we can point this at staging until the first production release in the beta series occurs)
  • make staging -> exactly like make all, except it uses the nightly RPMs
  • make dev -> preserves the current behavior of make all
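
To make the proposal concrete, here is a minimal sketch of the three targets. This is an illustration only: the SECUREDROP_ENV variable matches the Salt logic sketched later in this thread, but the script name and plumbing are assumptions, not the actual implementation.

    # Makefile sketch (recipes must be indented with tabs). Assumes a single
    # provisioning script that branches on SECUREDROP_ENV; names are hypothetical.
    all: ## prod: provision from RPMs signed with the release key (default)
    	SECUREDROP_ENV=prod ./scripts/provision-all

    staging: ## like `all`, but installs the nightly RPMs from yum-test
    	SECUREDROP_ENV=staging ./scripts/provision-all

    dev: ## current `make all` behavior: copy files straight from the repo
    	SECUREDROP_ENV=dev ./scripts/provision-all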

Specific next steps

  • Create RPM backend: https://yum.securedrop.org/ (we already have yum-test)
  • Update scripts in LFS repos to perform signing
  • Add conditional Salt logic for dev/prod yum repo URLs, default to prod
  • Add conditional Salt logic for dev/prod apt repo URLs, default to dev
  • Update make targets as described above (all, staging, dev)
  • Provide admin-facing wrapper for enforcing state, e.g. /usr/local/bin/securedrop-install (sketched below)

We'll also need to consolidate apt-test-qubes into the apt-test repo now that we have dual-channel support, but that should be tracked in a separate ticket and isn't critical-path for the RPM story.
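
On the wrapper item in the checklist above, a minimal sketch of what it could look like, assuming the RPM ships its Salt configuration under /usr/share/workstation-dom0-config/ (the path emkll mentions below); the final name and behavior may well differ:

    #!/bin/bash
    # /usr/local/bin/securedrop-install -- hypothetical sketch, not the shipped tool.
    # Re-applies the packaged Salt configuration so admins can enforce state.
    set -euo pipefail
    cd /usr/share/workstation-dom0-config/
    ./scripts/provision-all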

@conorsch (Contributor) commented:

> make dev -> preserves current behavior of make all, i.e. copies files from the repo to target locations, and ensures the dom0 config RPM is absent

Then this sounds pretty good. Note that until we settle the RPM signing story for nightly/staging vs. prod, we'll have to update the config logic at least to use different URL endpoints for prod/dev. If we decide to switch to signing repo metadata rather than the RPMs themselves, then we'll have to update the yum configs with gpgcheck=0 and repo_gpgcheck=1.
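
For reference, under that metadata-signing approach the dom0 repo file would look roughly like this (the repo id is made up; the other values mirror the Salt state quoted further down). As the next comment explains, we ultimately kept signing the RPMs themselves:

    [securedrop-workstation-dom0]
    name=SecureDrop Workstation Qubes dom0 repo
    baseurl=https://yum-test.securedrop.org/workstation/dom0/f25
    enabled=1
    # Skip per-RPM signature checks; require signed repo metadata instead:
    gpgcheck=0
    repo_gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test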

eloquence changed the title from "use rpm in dom0 for make all" to "Use rpm in dom0 for make all" on Jan 22, 2020
emkll self-assigned this on Jan 22, 2020
@conorsch (Contributor) commented:

> If we decide to switch to signing repo metadata rather than the RPMs themselves

No, we'll continue to sign the RPMs themselves, because Qubes requires that RPMs be signed directly if used in dom0: https://github.com/QubesOS/qubes-core-admin-linux/blob/9cf273d187f513ae3a6b55ff8c060a9d85714400/dom0-updates/qubes-receive-updates#L101 (hat tip to @kushaldas)
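
Concretely, each RPM has to carry its own embedded signature, e.g. (package glob and key name here are placeholders):

    # Sign the built RPM directly with the release (or test) key:
    rpmsign --define '_gpg_name SecureDrop Release Signing Key' --addsign \
        securedrop-workstation-dom0-config-*.rpm
    # Verify before publishing; output should report a good GPG/PGP signature:
    rpm -Kv securedrop-workstation-dom0-config-*.rpm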

Updated the OP with a checklist documenting next steps. I'll work on the backend tasks first, to unblock testing by other team members.

@emkll (Contributor) commented Jan 23, 2020

Installing the dom0 RPM was failing on latest master due to missing files. I have made some updates to the Python manifest and RPM spec here: https://github.com/freedomofpress/securedrop-workstation/tree/add-files-rpmspec

On a clean Qubes install, after copying a config.json and private key to /srv/salt/sd/ and running ./scripts/provision-all from /usr/share/workstation-dom0-config/, I am happy to report that the install completes successfully.
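
For anyone reproducing, those steps amount to the following in dom0 (the private key filename here is an assumption):

    # From a clean Qubes install, after installing the dom0 config RPM:
    sudo cp config.json sd-journalist.sec /srv/salt/sd/   # key filename assumed
    cd /usr/share/workstation-dom0-config/
    ./scripts/provision-all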

I have also added the tests/ directory to the RPM, so that we can verify the system state is more or less as expected. However, I ran into some issues with the build process and one of the tests; reverting 225c688 will allow one to observe the error.

Are there any reasons why we should exclude the tests from the RPM? They might be useful for debugging purposes.

Most tests are passing, but some are failing because the file paths of some config files are not present. Perhaps we can duplicate some files, or adapt the tests so that they work in both the RPM and make all scenarios?

@conorsch (Contributor) commented Jan 23, 2020

Great news, @emkll! The test issue you flag is intriguing... I'm in favor of trying to resolve it, so that we can preserve the tests for developers as well as include them for at least the upcoming pilot. At the very least, including them for the pilot will enable FPF staff and org admins to sanity-check the setup immediately after first install, which is highly valuable.

> Most tests are passing, but some are failing because the file paths of some config files are not present.

Will need to repro before commenting further; thanks for pointing out the discrepancy.

@conorsch (Contributor) commented:

https://yum.securedrop.org/ is now live (although it doesn't have any RPMs yet, given that https://github.com/freedomofpress/securedrop-workstation-prod-rpm-packages-lfs/ is also empty); I've updated the checklist.

@conorsch (Contributor) commented Jan 24, 2020

Started working on the dev/prod conditional logic. Turns out that'll be more complicated than a separate make target: we need a Salt-ish way to choose between different sets of vars. Tried using an include-based approach:

Example patch:

 $ git diff master
diff --git a/dom0/fpf-apt-test-repo.sls b/dom0/fpf-apt-test-repo.sls
index b96a67f..c7720fb 100644
--- a/dom0/fpf-apt-test-repo.sls
+++ b/dom0/fpf-apt-test-repo.sls
@@ -9,6 +9,7 @@
 #
 include:
   - update.qubes-vm
+  - sd-default-config
 
 # That's right, we need to install a package in order to
 # configure a repo to install another package
@@ -23,9 +24,9 @@ install-python-apt-for-repo-config:
 
 configure-apt-test-apt-repo:
   pkgrepo.managed:
-    - name: "deb [arch=amd64] https://apt-test-qubes.freedom.press {{ grains['oscodename'] }} main"
+    - name: "deb [arch=amd64] {{ sdvars.apt_repo_url }} {{ grains['oscodename'] }} main"
     - file: /etc/apt/sources.list.d/securedrop_workstation.list
-    - key_url: "salt://sd/sd-workstation/apt-test-pubkey.asc"
+    - key_url: "salt://sd/sd-workstation/{{ sdvars.signing_key_filename }}"
     - clean_file: True # squash file to ensure there are no duplicates
     - require:
       - pkg: install-python-apt-for-repo-config
diff --git a/dom0/sd-default-config.sls b/dom0/sd-default-config.sls
new file mode 100644
index 0000000..18d396e
--- /dev/null
+++ b/dom0/sd-default-config.sls
@@ -0,0 +1,23 @@
+# -*- coding: utf-8 -*-
+# vim: set syntax=yaml ts=2 sw=2 sts=2 et :
+
+# DEBUGGING
+{% set sd_env = salt['environ.get']('SECUREDROP_ENV', default='dev') %}
+# See references:
+#
+#   - https://docs.saltstack.com/en/latest/topics/tutorials/states_pt3.html
+#
+
+
+# Example loading taken from Qubes /srv/salt/top.sls
+
+{% load_yaml as sdvars_defaults %}
+{% include "sd-default-config.yml" %}
+{% endload %}
+
+
+{% if sd_env == "prod" %}
+{% set sdvars = sdvars_defaults['prod'] %}
+{% else %}
+{% set sdvars = sdvars_defaults['dev'] %}
+{% endif %}
diff --git a/dom0/sd-default-config.yml b/dom0/sd-default-config.yml
new file mode 100644
index 0000000..0ed6823
--- /dev/null
+++ b/dom0/sd-default-config.yml
@@ -0,0 +1,10 @@
+---
+securedrop_defaults:
+  prod:
+    dom0_yum_repo_url: "https://yum.securedrop.org/workstation/dom0/f25"
+    apt_repo_url: "https://apt.freedom.press"
+    signing_key_filename: "securedrop-release-signing-pubkey.asc"
+  dev:
+    dom0_yum_repo_url: "https://yum-test.securedrop.org/workstation/dom0/f25"
+    apt_repo_url: "https://apt-test-qubes.freedom.press"
+    signing_key_filename: "apt-test-pubkey.asc"
diff --git a/dom0/sd-dom0-files.sls b/dom0/sd-dom0-files.sls
index 4cc38c3..e729ff4 100644
--- a/dom0/sd-dom0-files.sls
+++ b/dom0/sd-dom0-files.sls
@@ -11,6 +11,9 @@ include:
   # The anon-whonix config pulls in sys-whonix and sys-firewall,
   # as well as ensures the latest versions of Whonix are installed.
   - qvm.anon-whonix
+  # import vars
+  - sd-default-config
+
 
 dom0-rpm-test-key:
   file.managed:
@@ -19,7 +22,7 @@ dom0-rpm-test-key:
     # we must place the GPG key inside the fedora-30 TemplateVM, then
     # restart sys-firewall.
     - name: /etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test
-    - source: "salt://sd/sd-workstation/apt-test-pubkey.asc"
+    - source: "salt://sd/sd-workstation/{{ sdvars.signing_key_filename }}"
     - user: root
     - group: root
     - mode: 644
@@ -44,7 +47,7 @@ dom0-workstation-rpm-repo:
         gpgcheck=1
         gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation-test
         enabled=1
-        baseurl=https://yum-test.securedrop.org/workstation/dom0/f25
+        baseurl={{ sdvars.dom0_yum_repo_url }}
         name=SecureDrop Workstation Qubes dom0 repo
     - require:
       - file: dom0-rpm-test-key

to no avail. Looks like we must use Pillar to pass vars through multiple states.

As usual, trawling through /srv/ in dom0 shows some practical examples.
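
For the record, a Pillar-based approach would look roughly like the following; the file paths and key names are guesses based on the standard Salt layout, not a tested patch:

    # /srv/pillar/top.sls
    base:
      '*':
        - securedrop

    # /srv/pillar/securedrop.sls
    {% set sd_env = salt['environ.get']('SECUREDROP_ENV', default='dev') %}
    securedrop:
    {% if sd_env == 'prod' %}
      apt_repo_url: https://apt.freedom.press
      signing_key_filename: securedrop-release-signing-pubkey.asc
    {% else %}
      apt_repo_url: https://apt-test-qubes.freedom.press
      signing_key_filename: apt-test-pubkey.asc
    {% endif %}

    # Any state file can then read the values without include tricks:
    #   {{ salt['pillar.get']('securedrop:apt_repo_url') }}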

@conorsch (Contributor) commented:

> Looks like we must use pillar to pass vars through multiple states

Had a pairing session with @emkll today, who pointed out some creative ways to import vars without resorting to Pillar. See the WIP implementation in #432.
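
One such pillar-free pattern is Jinja's import_yaml plus a from-import, both of which Salt supports natively; whether #432 uses exactly this shape is a guess:

    # dom0/sd-default-config.sls: load defaults once, at the template top level
    {% set sd_env = salt['environ.get']('SECUREDROP_ENV', default='dev') %}
    {% import_yaml "sd-default-config.yml" as sdvars_defaults %}
    {% set sdvars = sdvars_defaults['securedrop_defaults'][sd_env] %}

    # Other state files can then import the resolved vars without Pillar:
    {% from "sd-default-config.sls" import sdvars with context %}
    configure-apt-test-apt-repo:
      pkgrepo.managed:
        - name: "deb [arch=amd64] {{ sdvars.apt_repo_url }} {{ grains['oscodename'] }} main"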

> Update make targets as described above (all, staging, dev)

The "dev" and "prod" scenarios are rather straightforward, and are already implemented in the #432 WIP PR. The "staging" scenario has been complicated somewhat by the enabling of automatic nightly packages for rpms (#357). We've discussed that CI-built artifacts shouldn't go straight to dom0 on primary developer workstations—but that's precisely what'll happen with the nightly RPM workflow as currently implemented. Until we can discuss further, I've temporarily disabled the pull/fetch logic from the https://yum-test.securedrop.org repo on the backend. RPMs will be appended to https://github.com/freedomofpress/securedrop-workstation-dev-rpm-packages-lfs/ automatically, but they won't serve.

@conorsch (Contributor) commented Jan 27, 2020

> We've discussed that CI-built artifacts shouldn't go straight to dom0 on primary developer workstations, but that's precisely what'll happen with the nightly RPM workflow as currently implemented.

After some additional discussion with @emkll, we should be fine to proceed with the "staging" target: since nightlies only just started being committed, devs to date will not have installed the RPM via the test repo. To ensure the integrity of packages in dom0, however, the "dev" target should remove the dom0 Salt RPM if it's already installed. Additionally, let's consider prompting for confirmation that the laptop is test-only when the "staging" target is used. A sketch of both safeguards follows.
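
Both safeguards could look something like this in the respective targets (the package name is an assumption):

    # "dev" target: ensure the packaged dom0 config is absent before copying files
    if rpm -q securedrop-workstation-dom0-config >/dev/null 2>&1; then
        sudo dnf remove -y securedrop-workstation-dom0-config
    fi

    # "staging" target: confirm this is a test-only machine before proceeding
    read -rp "staging installs CI-built nightly RPMs; test machines only. Continue? [y/N] " reply
    [[ "$reply" =~ ^[Yy] ]] || exit 1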

I've re-enabled the automatic publishing on https://yum-test.securedrop.org, so we should see the latest nightly packages appear there shortly.

@eloquence (Member) commented:

Once #432 is merged, what (if any) follow-up is required to resolve this release blocker?

@emkll (Contributor) commented Feb 4, 2020

@eloquence I believe once this is merged we should:

There may be further follow-ups that come out of the review.

@redshiftzero (Contributor, Author) commented:

We sign the RPM package in dev-lfs CI (ref freedomofpress/securedrop-builder#129), and we'll otherwise need to sign manually, so this is resolved. Any further improvements here can go in follow-ups.
