Automatically builds deb packages in staging environment #1464
Conversation
Taking a look at this now.
During local testing I was able to edit staging vars such as version and local templates, then access the Source Interface over an Onion URL running from a local VM. The playbooks should holler at you if you do anything wrong. Some more guiding language about the external ossec repo dependency may be warranted.
Force-pushed from 0027096 to e6724fd.
Rebased on top of latest develop and cleaned up some of the apt/dpkg task edits near the end of the commit sequence. Ready for review.
Everything looks great and works in practice; however, I do want to hear your feedback on the one Ruby syntax comment I made.
Vagrantfile (outdated)
@@ -83,7 +83,7 @@ Vagrant.configure("2") do |config|
  'securedrop_application_server' => %(app-staging),
  'securedrop_monitor_server' => %(mon-staging),
`%(String)` creates a string in Ruby. In the Vagrant example for generating inventory groups, they explicitly write out the datatype instead of using this sugar, and use arrays even for singular items. I think it would be better to follow their lead, and s/%(/%w(/ for all these occurrences.
@fowlslegs Updated the Ruby lines in the Vagrantfile as you recommended in your review (cd07f1c).
Would it be possible to just re-install the app-code package and not the OSSEC packages as well?
@conorsch and I just talked for a minute, and I expressed concerns about the process of building the OSSEC packages via a separate repo being cumbersome. It also seems possible if not likely that a developer could forget to pull, rebuild, and re-copy-over new OSSEC packages as work happens in that repo. I proposed the possibility of merging that repo into this one. By merging the OSSEC repository into the SD repository, I see us gaining two things:
^ This is basically starting a new conversation; even though this is a relevant place to discuss, it should become its own issue. I'm doing some final testing, then plan to merge this one, because it's certainly an improvement over the current status of things, and the OSSEC repo has been very static.
You just forgot to add the new keyring package to the `local_deb_packages` group vars.
Just giving a "request changes" review to block merging and make sure the issue I reported above gets fixed before we do so:
> You just forgot to add the new keyring package to the `local_deb_packages` group vars.
Using Ansible's apt module causes package installation of deb files to be skipped if the version declared in the control file matches what's already installed. That's no good for rapid development, because the source files within the deb package may have changed. Using `dpkg -i` ensures the packages are installed regardless of content changes. However, `dpkg -i` isn't smart enough to resolve dependencies that aren't met, such as `expect` for the ossec packages. So let's first use the `apt` module with the `deb` parameter, then follow up with `dpkg -i` to ensure that rebuilding implements new changes, as well.
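A minimal sketch of that two-pass approach; the build path and task layout here are illustrative assumptions, not the role's exact contents, though the `local_deb_packages` list var is referenced elsewhere in this PR:

```yaml
# Illustrative two-pass install; build path and task names are assumptions.
- name: install local deb packages (first pass, apt resolves dependencies)
  apt:
    deb: "/tmp/build/{{ item }}"
  with_items: "{{ local_deb_packages }}"

- name: reinstall local deb packages (second pass, dpkg forces new content)
  command: dpkg -i /tmp/build/{{ item }}
  with_items: "{{ local_deb_packages }}"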
The install-local-packages role expects the deb files for ossec in the build/ directory, but those packages aren't created by roles in this repo. If the packages are missing, the error message will instruct developers to check out the docs. Now the docs provide useful info.
The `%()` format declares a string, which works in the case of Vagrant generating an Ansible inventory file, but only due to automagic. Let's explicitly use the `%w()` format to declare a list, in all contexts, even if there's a single group member.
Added role default vars, enforcing the prod-friendly config, i.e. install from the FPF apt repo and configure Apache. Made the associated task file includes optional via var overrides, so that we can skip the apt repo portion under both staging and development, and the apache config portion under development.
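As a sketch, the prod-friendly defaults and optional includes could look like this; the variable and file names are illustrative guesses, not necessarily what the role uses:

```yaml
# defaults/main.yml — prod-friendly defaults (illustrative names)
install_from_fpf_repo: yes
configure_apache: yes

# tasks/main.yml — each include becomes skippable via a var override
- include: fpf_repo.yml
  when: install_from_fpf_repo
- include: apache.yml
  when: configure_apache
```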
The `securedrop-app-code` package requires this package, since the Ansible setup scripts reference it when configuring the application.
Adding all requirements files under the top-level `requirements/` directory in the securedrop install_files dir. Although it's sane to ship only the prod requirements, the app-test logic currently assumes the file will be present in the local docroot, so let's make sure it lands there, even when dynamically building deb packages for staging.
In general, role dependencies via Ansible's meta/main.yml file are tricky to work with. It's often better to declare role includes explicitly in a playbook.
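For example, rather than pulling in `install-local-packages` via another role's `meta/main.yml` dependencies, a playbook can declare it directly (the hosts pattern here is illustrative):

```yaml
# Explicit role include in a playbook, instead of a meta/main.yml dependency:
- hosts: staging
  roles:
    - install-local-packages
```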
When we released the `securedrop-keyring` package, we added installation of it to the playbook logic, so that instances will get it automatically on both hosts on first run. The FPF repo logic is separate from the install-local-packages logic, and so a tweak is necessary to skip the from-repo installation under the staging environment.
Using a two-pass approach currently, and permitting errors on the first run as an intermediate stage toward getting this behavior more reliable. The `deb` parameter for the Ansible apt module is finicky; ideally it would provide idempotence for us based on package version, but maybe we'll need to upgrade Ansible to 2.x in order to get sane behavior. Punting on delivering all-green playbook output until we have the Ansible version settled.
Overriding at the site level, so we have sane vars populated for the Application and Monitor server. By default, the role will take no action, which is better than erroring out due to undefined variables.
This ensures that the locally built and directly installed (as opposed to via the production apt repository) deb packages are not clobbered by apt upgrades in the staging environment.
The provisioning process for staging now includes automatic building of deb packages, and we don't want those locally built packages clobbered by packages from the prod apt repo. (Use the prod environment for that.)
When given a directory for `dest`, the `get_url` module will *always* redownload the file, because it can't know the intended filename until it makes the request. By providing an explicit dest filepath, Ansible will skip downloading if the file already exists. Note that it won't do checksum comparison on the already downloaded file, which is disappointing, but let's not redownload a 42MB file on every playbook run. That's particularly problematic now that the staging environment automatically rebuilds deb packages alongside the playbook, so the staging playbook is much more likely to be rerun.
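The difference, sketched with an assumed URL variable and filename (both illustrative):

```yaml
# dest is a directory: get_url re-downloads the file on every run
- name: fetch ossec deb package (always re-downloads)
  get_url:
    url: "{{ ossec_deb_url }}"   # illustrative variable name
    dest: /tmp/build/

# dest is an explicit filepath: download is skipped if the file already exists
- name: fetch ossec deb package (skipped when present)
  get_url:
    url: "{{ ossec_deb_url }}"
    dest: /tmp/build/{{ ossec_deb_filename }}
```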
Only applies to staging hosts, but makes sure the `securedrop-keyring` package is present, and installs it from the locally built deb package. The keyring package isn't at all necessary under staging, since it's used for managing apt keyrings, and the staging machines mostly install from local deb packages, but it's important that we keep the staging environment config as close as possible to the config described by the current state of the git repo, even under development.
For the Travis CI run, we dynamically create a simple inventory file that targets localhost, and call a travis-specific playbook to provision the developer environment within Travis. Long-term, we should aim to reuse the development playbook for testing via Travis, but doing so would require some group_vars -> host_vars conversion. In the meantime, we'll update the inventory generation logic for Travis to match what Vagrant creates via the group memberships.
Force-pushed from 1788ff5 to d33f164.
Rebased on top of latest develop. The changes here are largely sound, but the config tests caught that the logic marking the locally built packages as "held", to prevent upgrades from clobbering with repo packages, wasn't working as expected. Will update that logic and confirm tests pass locally prior to merge. (We don't run the staging environment tests in CI yet.)
Using `apt-mark hold <package>` only applies to apt commands, not aptitude commands, so an aptitude safe-upgrade will still upgrade packages marked as held via apt-mark. For aptitude, we must use `aptitude hold <package>`. Let's do both, to ensure that packages in the staging environment are never clobbered with repository packages.
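Doing both, sketched as Ansible tasks over a hypothetical `held_packages` list (the real task layout may differ):

```yaml
# Hold packages under both apt and aptitude; the list name is illustrative.
- name: mark packages as held for apt
  command: apt-mark hold {{ item }}
  with_items: "{{ held_packages }}"

- name: mark packages as held for aptitude
  command: aptitude hold {{ item }}
  with_items: "{{ held_packages }}"
```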
We're currently pinning a specific version of Firefox for compatibility with Selenium. Previously, multiple runs of the playbook would cause the version of Firefox to get clobbered with whatever is current in the Ubuntu repos; setting the package on hold via `apt-mark` and `aptitude` prevents clobbering.
Not sure how this was missed before; presumably because we aren't running the staging environment in CI yet. The new version in the config test matches what's specified in the `test-requirements.txt` file.
Building the deb packages locally and installing directly revealed some permissions disparity between what's set up via Ansible and what's enforced by the postinst scripts in the FPF-maintained OSSEC deb packages. Mostly the permissions are not cleanly declared in the postinst scripts, e.g. `chmod -R 550 <dir>`, which unnecessarily marks files as executable. In practice, the permissions set by Ansible will be overridden by the postinst scripts during the next upgrade, which means there's a window between the first installed version and the first upgraded version where the permissions are not set correctly. We should consider folding the OSSEC build process into the SD repo so we can test these builds better.
Reimplementation of changes presented in #1214. The staging playbook now also includes the build-deb-pkgs playbook, so the latest app code in the local git repo is packaged and installed on staging machines. This is much more intuitive behavior for a "staging" environment than what exists at present: Ansible configs are pulled in from the local git repo, but application code is pulled in from the FPF apt repository (apt.freedom.press), installing whatever the latest production release is.
The prod environment still installs hosted deb packages from apt.freedom.press, as intended. To test:
If you do not bring up the build machine, the playbook will fail and instruct you to do so. Hurray for informative error messages. Also includes a few test updates to track changes brought in via #1445.
Supersedes and therefore closes #1214. Closes #1040.