Add package repository scripts, run in CI (staging PoC) #1916

Merged · 25 commits · Apr 12, 2021

Commits
a289615  Add package repository scripts, run in CI (Mar 2, 2021)
a179839  Disable CloudFront caching for repo metadata files (Mar 22, 2021)
b4cd8ea  Sync remote to local with s3cmd (Mar 22, 2021)
ff02aae  Add some TODOs (Mar 23, 2021)
c8a452e  Add k6 logo to index.html generation script (Mar 23, 2021)
5a3dee2  Use Packages as main directory instead of dl.k6.io (Mar 23, 2021)
ea6e752  Add removal of old package files (Mar 24, 2021)
056156d  Sync to S3 in each script (Mar 24, 2021)
2b814c6  Use dl.k6.io as default bucket name (Mar 29, 2021)
9a12318  Add static title to index.html (Mar 29, 2021)
c34fc05  Publish the GPG pub key to S3 (Mar 29, 2021)
2585456  Add S3 redirect to latest MSI package (Apr 7, 2021)
a874b03  Remove old packages based on time (Apr 7, 2021)
ffe9ea1  Copy latest MSI package file instead of using redirects (Apr 7, 2021)
fed0135  Switch to awscli, trigger CloudFront invalidation (Apr 7, 2021)
e694a16  Try S3 redirect again for latest MSI package (Apr 8, 2021)
0fe17bf  Also sync the GPG key (Apr 8, 2021)
d9600c8  Try to quiet awscli output (Apr 8, 2021)
4cc0232  Set a short cache expiration time for index and repo metadata files (Apr 8, 2021)
c60ff60  Copy latest MSI package file instead of using redirects... again (Apr 8, 2021)
b5c3736  Verify awscli signature before installing (Apr 9, 2021)
90da634  Link k6 logo to home (Apr 9, 2021)
efab809  Add CI job to publish k6packager Docker image to GHCR (Apr 9, 2021)
46e0a1a  Set S3 bucket to the production one (Apr 9, 2021)
dec4712  Allow AWSCLI_VERSION to be set from the environment (Apr 12, 2021)
28 changes: 18 additions & 10 deletions .github/workflows/all.yml
@@ -366,13 +366,15 @@ jobs:
           done
           hub release create "${assets[@]}" -m "$VERSION" -m "$(cat ./release\ notes/${VERSION}.md)" "$VERSION"

-  publish-bintray:
+  publish-packages:
     runs-on: ubuntu-latest
     needs: [configure, build, package-windows]
     if: startsWith(github.ref, 'refs/tags/v')
     env:
       VERSION: ${{ needs.configure.outputs.version }}
     steps:
+      - name: Checkout code
+        uses: actions/checkout@v2
       - name: Download binaries
         uses: actions/download-artifact@v2
         with:
@@ -383,13 +385,19 @@ jobs:
         with:
           name: binaries-windows
           path: dist
-      - name: Upload packages to Bintray
+      - name: Setup docker-compose environment
+        run: |
+          cat > packaging/.env <<EOF
+          AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }}
+          AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
+          AWS_DEFAULT_REGION=eu-west-1
+          AWS_CF_DISTRIBUTION="${{ secrets.AWS_CF_DISTRIBUTION }}"
+          PGP_SIGN_KEY_PASSPHRASE=${{ secrets.PGP_SIGN_KEY_PASSPHRASE }}
+          EOF
+          echo "${{ secrets.PGP_SIGN_KEY }}" > packaging/sign-key.gpg
+      - name: Publish packages
         run: |
-          curl -fsS -H "X-GPG-PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}" -T "dist/k6-$VERSION-amd64.deb" \
-            "https://${{ secrets.BINTRAY_USER }}:${{ secrets.BINTRAY_KEY }}@api.bintray.com/content/loadimpact/deb/k6/${VERSION#v}/k6-${VERSION}-amd64.deb;deb_distribution=stable;deb_component=main;deb_architecture=amd64;publish=1;override=1"
-          curl -fsS -H "X-GPG-PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}" -T "dist/k6-$VERSION-amd64.rpm" \
-            "https://${{ secrets.BINTRAY_USER }}:${{ secrets.BINTRAY_KEY }}@api.bintray.com/content/loadimpact/rpm/k6/${VERSION#v}/k6-${VERSION}-amd64.rpm?publish=1&override=1"
-          curl -fsS -H "X-GPG-PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}" -T "dist/k6-$VERSION-win64.msi" \
-            "https://${{ secrets.BINTRAY_USER }}:${{ secrets.BINTRAY_KEY }}@api.bintray.com/content/loadimpact/windows/k6/${VERSION#v}/k6-${VERSION}-amd64.msi?publish=1&override=1"
-          curl -fsS -H "X-GPG-PASSPHRASE: ${{ secrets.GPG_PASSPHRASE }}" -T "dist/k6.portable.${VERSION#v}.nupkg" \
-            "https://${{ secrets.BINTRAY_USER }}:${{ secrets.BINTRAY_KEY }}@api.bintray.com/content/loadimpact/choco/k6.portable/${VERSION#v}/k6.portable.${VERSION}.nupkg?publish=1&override=1"
+          echo "${{ secrets.CR_PAT }}" | docker login https://ghcr.io -u ${{ github.actor }} --password-stdin
+          cd packaging
+          docker-compose pull packager
+          docker-compose run --rm packager
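For local testing, the publish flow above can be reproduced by hand. A minimal sketch, assuming the packaging/docker-compose.yml added in this PR defines the "packager" service; the placeholder values are hypothetical and must be replaced with real credentials:

cd packaging
cat > .env <<EOF
AWS_ACCESS_KEY_ID=<key-id>
AWS_SECRET_ACCESS_KEY=<secret>
AWS_DEFAULT_REGION=eu-west-1
AWS_CF_DISTRIBUTION="<cloudfront-distribution-id>"
PGP_SIGN_KEY_PASSPHRASE=<passphrase>
EOF
# Private signing key, exported beforehand (hypothetical path):
cp ~/keys/k6-sign-key.gpg sign-key.gpg
docker-compose pull packager
docker-compose run --rm packager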
32 changes: 32 additions & 0 deletions .github/workflows/packager.yml
@@ -0,0 +1,32 @@
name: k6packager
on:
# Enable manually triggering this workflow via the API or web UI
workflow_dispatch:
schedule:
- cron: '0 0 * * 0' # weekly (Sundays at 00:00)

defaults:
run:
shell: bash

jobs:
publish-packager:
runs-on: ubuntu-latest
env:
VERSION: 0.0.1
AWSCLI_VERSION: 2.1.36
DOCKER_IMAGE_ID: k6io/k6packager
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Build
run: |
cd packaging
docker-compose build packager
- name: Publish
run: |
echo "${{ secrets.CR_PAT }}" | docker login https://ghcr.io -u ${{ github.actor }} --password-stdin
docker tag "$DOCKER_IMAGE_ID" "ghcr.io/${DOCKER_IMAGE_ID}:${VERSION}"
docker push "ghcr.io/${DOCKER_IMAGE_ID}:${VERSION}"
docker tag "$DOCKER_IMAGE_ID" "ghcr.io/${DOCKER_IMAGE_ID}:latest"
docker push "ghcr.io/${DOCKER_IMAGE_ID}:latest"
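A quick sanity check of the published image (a sketch; it assumes the ghcr.io/k6io/k6packager image is pullable with your GHCR credentials, and that the awscli installer placed the binary at /usr/local/bin/aws as the Dockerfile below implies):

docker pull ghcr.io/k6io/k6packager:latest
docker run --rm --entrypoint /usr/local/bin/aws ghcr.io/k6io/k6packager:latest --version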
5 changes: 4 additions & 1 deletion .gitignore
@@ -24,4 +24,7 @@
 !/vendor/modules.txt
 /vendor/**/*.y*ml
 /vendor/**/.*.y*ml
-/vendor/github.com/dlclark/regexp2/testoutput1
+/vendor/github.com/dlclark/regexp2/testoutput1
+
+/packaging/.env
+/packaging/*.gpg
35 changes: 35 additions & 0 deletions packaging/Dockerfile
@@ -0,0 +1,35 @@
FROM debian:buster-20210311

LABEL maintainer="k6 Developers <[email protected]>"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y && \
apt-get install -y apt-utils createrepo curl git gnupg2 python3 unzip

COPY ./awscli-key.gpg .

ARG AWSCLI_VERSION

# Download awscli, check GPG signature and install.
RUN export GNUPGHOME="$(mktemp -d)" && \
gpg2 --import ./awscli-key.gpg && \
fpr="$(gpg2 --with-colons --fingerprint aws-cli | grep '^fpr' | cut -d: -f10)" && \
gpg2 --export-ownertrust && echo "${fpr}:6:" | gpg2 --import-ownertrust && \
curl -fsSL --remote-name-all \
"https://awscli.amazonaws.com/awscli-exe-linux-x86_64${AWSCLI_VERSION:+-$AWSCLI_VERSION}.zip"{,.sig} && \
gpg2 --verify awscli*.sig awscli*.zip && \
unzip -q awscli*.zip && \
./aws/install && \
rm -rf aws* "$GNUPGHOME"

RUN addgroup --gid 1000 k6 && \
useradd --create-home --shell /bin/bash --no-log-init \
--uid 1000 --gid 1000 k6

COPY bin/ /usr/local/bin/

USER k6
WORKDIR /home/k6

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
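The AWSCLI_VERSION build argument pins the awscli release; when it is empty, the ${AWSCLI_VERSION:+-$AWSCLI_VERSION} expansion drops the suffix and the URL falls back to the latest build. Building the image standalone would look roughly like this (a sketch; CI builds it via docker-compose instead, with the version taken from the workflow environment):

docker build --build-arg AWSCLI_VERSION=2.1.36 -t k6io/k6packager packaging/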
29 changes: 29 additions & 0 deletions packaging/awscli-key.gpg
@@ -0,0 +1,29 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4WIQT7
Xbd/1cEYuAURraimMQrMRnJHXAUCXYKvtQIbAwUJB4TOAAULCQgHAgYVCgkICwIE
FgIDAQIeAQIXgAAKCRCmMQrMRnJHXJIXEAChLUIkg80uPUkGjE3jejvQSA1aWuAM
yzy6fdpdlRUz6M6nmsUhOExjVIvibEJpzK5mhuSZ4lb0vJ2ZUPgCv4zs2nBd7BGJ
MxKiWgBReGvTdqZ0SzyYH4PYCJSE732x/Fw9hfnh1dMTXNcrQXzwOmmFNNegG0Ox
au+VnpcR5Kz3smiTrIwZbRudo1ijhCYPQ7t5CMp9kjC6bObvy1hSIg2xNbMAN/Do
ikebAl36uA6Y/Uczjj3GxZW4ZWeFirMidKbtqvUz2y0UFszobjiBSqZZHCreC34B
hw9bFNpuWC/0SrXgohdsc6vK50pDGdV5kM2qo9tMQ/izsAwTh/d/GzZv8H4lV9eO
tEis+EpR497PaxKKh9tJf0N6Q1YLRHof5xePZtOIlS3gfvsH5hXA3HJ9yIxb8T0H
QYmVr3aIUes20i6meI3fuV36VFupwfrTKaL7VXnsrK2fq5cRvyJLNzXucg0WAjPF
RrAGLzY7nP1xeg1a0aeP+pdsqjqlPJom8OCWc1+6DWbg0jsC74WoesAqgBItODMB
rsal1y/q+bPzpsnWjzHV8+1/EtZmSc8ZUGSJOPkfC7hObnfkl18h+1QtKTjZme4d
H17gsBJr+opwJw/Zio2LMjQBOqlm3K1A4zFTh7wBC7He6KPQea1p2XAMgtvATtNe
YLZATHZKTJyiqA==
=vYOk
-----END PGP PUBLIC KEY BLOCK-----
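The Dockerfile above imports this key and marks it trusted before verifying the awscli download. A sketch of checking the key by hand outside the build; the printed fingerprint should match the one AWS publishes in its CLI installation docs:

export GNUPGHOME="$(mktemp -d)"
gpg2 --import packaging/awscli-key.gpg
gpg2 --with-colons --fingerprint aws-cli | grep '^fpr' | cut -d: -f10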
116 changes: 116 additions & 0 deletions packaging/bin/create-deb-repo.sh
@@ -0,0 +1,116 @@
#!/bin/bash
set -eEuo pipefail

# External dependencies:
# - https://salsa.debian.org/apt-team/apt (apt-ftparchive, packaged in apt-utils)
# - https://aws.amazon.com/cli/
# awscli expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set in the
# environment.
# - https://gnupg.org/
# For signing the script expects the private signing key to already be
# imported and PGPKEYID and PGP_SIGN_KEY_PASSPHRASE to be set in the
# environment.
# - generate_index.py
# For generating the index.html of each directory. It's available in the
# packaging/bin directory of the k6 repo, and should be in $PATH.

_s3bucket="${S3_BUCKET-dl.k6.io}"
_usage="Usage: $0 <pkgdir> <repodir> [s3bucket=${_s3bucket}]"
PKGDIR="${1?${_usage}}" # The directory where .deb files are located
REPODIR="${2?${_usage}}" # The package repository working directory
S3PATH="${3-${_s3bucket}}/deb"
# Remove packages older than N days (730 is roughly 2 years).
REMOVE_PKG_DAYS=730

log() {
echo "$(date -Iseconds) $*"
}

delete_old_pkgs() {
find "$1" -name '*.deb' -type f -daystart -mtime "+${REMOVE_PKG_DAYS}" -print0 | xargs -r0 rm -v

# Remove any dangling .asc files (signatures whose package was deleted above;
# "${f%.*}" strips the .asc suffix to recover the package filename)
find "$1" -name '*.asc' -type f -print0 | while read -r -d $'\0' f; do
if ! [ -r "${f%.*}" ]; then
rm -v "$f"
fi
done
}

sync_to_s3() {
log "Syncing to S3 ..."
aws s3 sync --no-progress --delete "${REPODIR}/" "s3://${S3PATH}/"

# Set a short cache expiration for index and repo metadata files.
aws s3 cp --no-progress --recursive \
--exclude='*.deb' --exclude='*.asc' --exclude='*.html' \
--cache-control='max-age=60,must-revalidate' \
--metadata-directive=REPLACE \
"s3://${S3PATH}" "s3://${S3PATH}"
# Set it separately for HTML files to set the correct Content-Type.
aws s3 cp --no-progress --recursive \
--exclude='*' --include='*.html' \
--content-type='text/html' \
--cache-control='max-age=60,must-revalidate' \
--metadata-directive=REPLACE \
"s3://${S3PATH}" "s3://${S3PATH}"
}

Contributor:
What does "Note that if the object is copied over in parts", from the documentation of this option, mean? I can't find an explanation, so I wonder if this is just something that aws s3 cp won't do on its own, or whether there is a flag for it that is just badly documented and my searching fails to turn up.

Contributor Author:
Yeah, the documentation is a bit confusing... I think this is mostly relevant when doing multipart uploads, but since we're copying from the same bucket and just replacing the metadata, I don't think it matters. I did have to specify content-type manually, because otherwise everything gets application/octet-stream :-/

This was more intuitive with s3cmd, which had a modify command, so hopefully this copying over doesn't incur additional costs.
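For reference, the s3cmd route mentioned above would update the metadata in place rather than copying each object onto itself. Roughly (a sketch, assuming a recent s3cmd with a recursive modify command; not part of this PR):

s3cmd modify --recursive \
    --add-header='Cache-Control: max-age=60,must-revalidate' \
    "s3://${S3PATH}/"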

# We don't publish i386 packages, but the repo structure is needed for
# compatibility on some systems. See https://unix.stackexchange.com/a/272916 .
architectures="amd64 i386"

pushd . > /dev/null
mkdir -p "$REPODIR" && cd "$_"

for arch in $architectures; do
bindir="dists/stable/main/binary-$arch"
mkdir -p "$bindir"
# Download existing files
aws s3 sync --no-progress --exclude='*' --include='*.deb' --include='*.asc' \
"s3://${S3PATH}/${bindir}/" "$bindir/"

# Copy the new packages in
find "$PKGDIR" -name "*$arch*.deb" -type f -print0 | xargs -r0 cp -t "$bindir"
# Generate signatures for files that don't have it
# TODO: Switch to debsign instead? This is currently done as Bintray did it,
# but the signature is not validated by apt/dpkg.
# https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-sign-verify-deb-packages-apt-repositories/
find "$bindir" -type f -name '*.deb' -print0 | while read -r -d $'\0' f; do
if ! [ -r "${f}.asc" ]; then
gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
--pinentry-mode=loopback --yes --detach-sign --armor -o "${f}.asc" "$f"
fi
done
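# Note: a detached signature created above can be checked manually with
# "gpg2 --verify <pkg>.deb.asc <pkg>.deb", assuming the public key is in the
# keyring; as the TODO above says, apt/dpkg do not validate these .asc files,
# only the signed Release file.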
apt-ftparchive packages "$bindir" | tee "$bindir/Packages"
gzip -fk "$bindir/Packages"
bzip2 -fk "$bindir/Packages"

delete_old_pkgs "$bindir"
done

log "Creating release file..."
apt-ftparchive release \
-o APT::FTPArchive::Release::Origin="k6" \
-o APT::FTPArchive::Release::Label="k6" \
-o APT::FTPArchive::Release::Suite="stable" \
-o APT::FTPArchive::Release::Codename="stable" \
-o APT::FTPArchive::Release::Architectures="$architectures" \
-o APT::FTPArchive::Release::Components="main" \
-o APT::FTPArchive::Release::Date="$(date -Ru)" \
"dists/stable" > "dists/stable/Release"

# Sign release file
gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
--pinentry-mode=loopback --yes --detach-sign --armor \
-o "dists/stable/Release.gpg" "dists/stable/Release"
gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
--pinentry-mode=loopback --yes --clear-sign \
-o "dists/stable/InRelease" "dists/stable/Release"

log "Generating index.html ..."
generate_index.py -r

popd > /dev/null

sync_to_s3
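Once synced, the tree under dists/stable is a standard apt repository. A sketch of client-side usage, assuming the bucket is served at https://dl.k6.io and the signing public key has already been added to apt's trusted keyring:

echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6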
66 changes: 66 additions & 0 deletions packaging/bin/create-msi-repo.sh
@@ -0,0 +1,66 @@
#!/bin/bash
set -eEuo pipefail

# External dependencies:
# - https://aws.amazon.com/cli/
# awscli expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set in the
# environment.
# - generate_index.py
# For generating the index.html of each directory. It's available in the
# packaging/bin directory of the k6 repo, and should be in $PATH.

_s3bucket="${S3_BUCKET-dl.k6.io}"
_usage="Usage: $0 <pkgdir> <repodir> [s3bucket=${_s3bucket}]"
PKGDIR="${1?${_usage}}" # The directory where .msi files are located
REPODIR="${2?${_usage}}" # The package repository working directory
S3PATH="${3-${_s3bucket}}/msi"
# Remove packages older than N days (730 is roughly 2 years).
REMOVE_PKG_DAYS=730

log() {
echo "$(date -Iseconds) $*"
}

delete_old_pkgs() {
find "$1" -name '*.msi' -type f -daystart -mtime "+${REMOVE_PKG_DAYS}" -print0 | xargs -r0 rm -v
}

sync_to_s3() {
log "Syncing to S3 ..."
aws s3 sync --no-progress --delete "${REPODIR}/" "s3://${S3PATH}/"

# Set a short cache expiration for index files and the latest MSI package.
aws s3 cp --no-progress --recursive --exclude='*' \
--include='*.html' \
--cache-control='max-age=60,must-revalidate' \
--content-type='text/html' \
--metadata-directive=REPLACE \
"s3://${S3PATH}" "s3://${S3PATH}"
aws s3 cp --no-progress \
--cache-control='max-age=60,must-revalidate' \
--content-type='application/x-msi' \
--metadata-directive=REPLACE \
"s3://${S3PATH}/k6-latest-amd64.msi" "s3://${S3PATH}/k6-latest-amd64.msi"
}

mkdir -p "$REPODIR"

# Download existing packages
# For MSI packages this is only done to be able to generate the index.html correctly.
# Should we fake it and create empty files that have the same timestamp and size as the original ones?
Member:
Nah, the cost of doing this once every 2 months or so is negligible, especially when you're deleting old packages... Though, if we ever figure it out, we might save some bandwidth by not hosting the .msi packages on dl.k6.io at all and just redirecting to the GitHub release binaries 😅

Contributor:
Hmmm, I do wonder if we shouldn't actually drop the whole MSI hosting and truly just use GitHub?

Member:
We'd still need a way to redirect people to the latest MSI release (and maybe to the other plain zipped binaries we have), which, judging by the "CloudFront caches redirects aggressively and I wasn't able to invalidate it" comment below, probably won't be easy to do if we just host redirects to them on dl.k6.io...

The current solution is good enough, and it has a nice side benefit of keeping a folder with the old installers at hand; we should leave it be, I think...

Contributor Author (@imiric, Apr 9, 2021):
Yeah, this would be good, though instead of dropping it entirely I'd prefer to have S3 redirects to GH like Ned mentioned, so that we can point users to a central location for all packages.

The caching issue wouldn't be a problem for links to specific versions, as those could remain static. The link to the latest version would be an issue, though, since it needs to be updated; we could work around that by publishing an MSI without a version in its filename, e.g. k6-amd64.msi. That way the "latest" link could also be static and redirect to https://github.com/k6io/k6/releases/latest/download/k6-amd64.msi.

Anyway, let's leave it as is for now and consider doing this later. It should be transparent to users if done correctly. 😅
aws s3 sync --no-progress --exclude='*' --include='*.msi' "s3://${S3PATH}/" "$REPODIR/"

# Copy the new packages in
find "$PKGDIR" -name "*.msi" -type f -print0 | xargs -r0 cp -t "$REPODIR"

delete_old_pkgs "$REPODIR"

# Update the latest package. This could be done with S3 redirects, but
# CloudFront caches redirects aggressively and I wasn't able to invalidate it.
# Note: the lexicographic sort below orders versions correctly only while
# their numeric components stay single-digit; sort -V would be more robust.
latest="$(find "$REPODIR" -name '*.msi' -printf '%P\n' | sort | tail -1)"
cp -p "${REPODIR}/${latest}" "${REPODIR}/k6-latest-amd64.msi"

log "Generating index.html ..."
(cd "$REPODIR" && generate_index.py -r)

sync_to_s3
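Since the newest installer is copied to a fixed name, clients get a stable download URL. Assuming the bucket is fronted at https://dl.k6.io as elsewhere in this PR, fetching the latest MSI is just:

curl -fsSL -O https://dl.k6.io/msi/k6-latest-amd64.msi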