Add package repository scripts, run in CI (staging PoC) #1916
@@ -0,0 +1,32 @@ (new file: GitHub Actions workflow)

name: k6packager
on:
  # Enable manually triggering this workflow via the API or web UI
  workflow_dispatch:
  schedule:
    - cron: '0 0 * * 0' # weekly (Sundays at 00:00)

defaults:
  run:
    shell: bash

jobs:
  publish-packager:
    runs-on: ubuntu-latest
    env:
      VERSION: 0.0.1
      AWSCLI_VERSION: 2.1.36
      DOCKER_IMAGE_ID: k6io/k6packager
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Build
        run: |
          cd packaging
          docker-compose build packager
      - name: Publish
        run: |
          echo "${{ secrets.CR_PAT }}" | docker login https://ghcr.io -u ${{ github.actor }} --password-stdin
          docker tag "$DOCKER_IMAGE_ID" "ghcr.io/${DOCKER_IMAGE_ID}:${VERSION}"
          docker push "ghcr.io/${DOCKER_IMAGE_ID}:${VERSION}"
          docker tag "$DOCKER_IMAGE_ID" "ghcr.io/${DOCKER_IMAGE_ID}:latest"
          docker push "ghcr.io/${DOCKER_IMAGE_ID}:latest"
@@ -0,0 +1,35 @@ (new file: Dockerfile, presumably the k6packager image built by the workflow above)

FROM debian:buster-20210311

LABEL maintainer="k6 Developers <[email protected]>"

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update -y && \
    apt-get install -y apt-utils createrepo curl git gnupg2 python3 unzip

COPY ./awscli-key.gpg .

ARG AWSCLI_VERSION

# Download awscli, check GPG signature and install.
RUN export GNUPGHOME="$(mktemp -d)" && \
    gpg2 --import ./awscli-key.gpg && \
    fpr="$(gpg2 --with-colons --fingerprint aws-cli | grep '^fpr' | cut -d: -f10)" && \
    gpg2 --export-ownertrust && echo "${fpr}:6:" | gpg2 --import-ownertrust && \
    curl -fsSL --remote-name-all \
        "https://awscli.amazonaws.com/awscli-exe-linux-x86_64${AWSCLI_VERSION:+-$AWSCLI_VERSION}.zip"{,.sig} && \
    gpg2 --verify awscli*.sig awscli*.zip && \
    unzip -q awscli*.zip && \
    ./aws/install && \
    rm -rf aws* "$GNUPGHOME"

RUN addgroup --gid 1000 k6 && \
    useradd --create-home --shell /bin/bash --no-log-init \
        --uid 1000 --gid 1000 k6

COPY bin/ /usr/local/bin/

USER k6
WORKDIR /home/k6

ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
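A detail of the RUN step above that is easy to miss: ${AWSCLI_VERSION:+-$AWSCLI_VERSION} expands to a -<version> suffix only when AWSCLI_VERSION is set and non-empty, so the build can either pin the awscli release passed in from the workflow (2.1.36) or fall back to the latest one, and the {,.sig} suffix makes curl fetch both the archive and its detached signature for the gpg2 --verify step. A minimal sketch of the expansion, assuming a plain bash shell:

    # With a version set, a "-<version>" suffix is inserted into the file name.
    AWSCLI_VERSION=2.1.36
    echo "awscli-exe-linux-x86_64${AWSCLI_VERSION:+-$AWSCLI_VERSION}.zip"
    # -> awscli-exe-linux-x86_64-2.1.36.zip

    # With the variable empty (e.g. the build arg not provided), nothing is inserted.
    AWSCLI_VERSION=
    echo "awscli-exe-linux-x86_64${AWSCLI_VERSION:+-$AWSCLI_VERSION}.zip"
    # -> awscli-exe-linux-x86_64.zip (the latest release)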
@@ -0,0 +1,29 @@ (new file: the awscli-key.gpg AWS CLI public key referenced by the Dockerfile above)

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4WIQT7
Xbd/1cEYuAURraimMQrMRnJHXAUCXYKvtQIbAwUJB4TOAAULCQgHAgYVCgkICwIE
FgIDAQIeAQIXgAAKCRCmMQrMRnJHXJIXEAChLUIkg80uPUkGjE3jejvQSA1aWuAM
yzy6fdpdlRUz6M6nmsUhOExjVIvibEJpzK5mhuSZ4lb0vJ2ZUPgCv4zs2nBd7BGJ
MxKiWgBReGvTdqZ0SzyYH4PYCJSE732x/Fw9hfnh1dMTXNcrQXzwOmmFNNegG0Ox
au+VnpcR5Kz3smiTrIwZbRudo1ijhCYPQ7t5CMp9kjC6bObvy1hSIg2xNbMAN/Do
ikebAl36uA6Y/Uczjj3GxZW4ZWeFirMidKbtqvUz2y0UFszobjiBSqZZHCreC34B
hw9bFNpuWC/0SrXgohdsc6vK50pDGdV5kM2qo9tMQ/izsAwTh/d/GzZv8H4lV9eO
tEis+EpR497PaxKKh9tJf0N6Q1YLRHof5xePZtOIlS3gfvsH5hXA3HJ9yIxb8T0H
QYmVr3aIUes20i6meI3fuV36VFupwfrTKaL7VXnsrK2fq5cRvyJLNzXucg0WAjPF
RrAGLzY7nP1xeg1a0aeP+pdsqjqlPJom8OCWc1+6DWbg0jsC74WoesAqgBItODMB
rsal1y/q+bPzpsnWjzHV8+1/EtZmSc8ZUGSJOPkfC7hObnfkl18h+1QtKTjZme4d
H17gsBJr+opwJw/Zio2LMjQBOqlm3K1A4zFTh7wBC7He6KPQea1p2XAMgtvATtNe
YLZATHZKTJyiqA==
=vYOk
-----END PGP PUBLIC KEY BLOCK-----
@@ -0,0 +1,116 @@ (new file: Bash script that builds the deb package repository and syncs it to S3)

#!/bin/bash
set -eEuo pipefail

# External dependencies:
# - https://salsa.debian.org/apt-team/apt (apt-ftparchive, packaged in apt-utils)
# - https://aws.amazon.com/cli/
#   awscli expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set in the
#   environment.
# - https://gnupg.org/
#   For signing the script expects the private signing key to already be
#   imported and PGPKEYID and PGP_SIGN_KEY_PASSPHRASE to be set in the
#   environment.
# - generate_index.py
#   For generating the index.html of each directory. It's available in the
#   packaging/bin directory of the k6 repo, and should be in $PATH.

_s3bucket="${S3_BUCKET-dl.k6.io}"
_usage="Usage: $0 <pkgdir> <repodir> [s3bucket=${_s3bucket}]"
PKGDIR="${1?${_usage}}"  # The directory where .deb files are located
REPODIR="${2?${_usage}}" # The package repository working directory
S3PATH="${3-${_s3bucket}}/deb"
# Remove packages older than N number of days (730 is roughly ~2 years).
REMOVE_PKG_DAYS=730

log() {
    echo "$(date -Iseconds) $*"
}

delete_old_pkgs() {
    find "$1" -name '*.deb' -type f -daystart -mtime "+${REMOVE_PKG_DAYS}" -print0 | xargs -r0 rm -v

    # Remove any dangling .asc files
    find "$1" -name '*.asc' -type f -print0 | while read -r -d $'\0' f; do
        if ! [ -r "${f%.*}" ]; then
            rm -v "$f"
        fi
    done
}

sync_to_s3() {
    log "Syncing to S3 ..."
    aws s3 sync --no-progress --delete "${REPODIR}/" "s3://${S3PATH}/"

    # Set a short cache expiration for index and repo metadata files.
    aws s3 cp --no-progress --recursive \
        --exclude='*.deb' --exclude='*.asc' --exclude='*.html' \
        --cache-control='max-age=60,must-revalidate' \
        --metadata-directive=REPLACE \
        "s3://${S3PATH}" "s3://${S3PATH}"
    # Set it separately for HTML files to set the correct Content-Type.
    aws s3 cp --no-progress --recursive \
        --exclude='*' --include='*.html' \
        --content-type='text/html' \
        --cache-control='max-age=60,must-revalidate' \
        --metadata-directive=REPLACE \
        "s3://${S3PATH}" "s3://${S3PATH}"
}

# We don't publish i386 packages, but the repo structure is needed for
# compatibility on some systems. See https://unix.stackexchange.com/a/272916 .
architectures="amd64 i386"

pushd . > /dev/null
mkdir -p "$REPODIR" && cd "$_"

for arch in $architectures; do
    bindir="dists/stable/main/binary-$arch"
    mkdir -p "$bindir"
    # Download existing files
    aws s3 sync --no-progress --exclude='*' --include='*.deb' --include='*.asc' \
        "s3://${S3PATH}/${bindir}/" "$bindir/"

    # Copy the new packages in
    find "$PKGDIR" -name "*$arch*.deb" -type f -print0 | xargs -r0 cp -t "$bindir"
    # Generate signatures for files that don't have it
    # TODO: Switch to debsign instead? This is currently done as Bintray did it,
    # but the signature is not validated by apt/dpkg.
    # https://blog.packagecloud.io/eng/2014/10/28/howto-gpg-sign-verify-deb-packages-apt-repositories/
    find "$bindir" -type f -name '*.deb' -print0 | while read -r -d $'\0' f; do
        if ! [ -r "${f}.asc" ]; then
            gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
                --pinentry-mode=loopback --yes --detach-sign --armor -o "${f}.asc" "$f"
        fi
    done
    apt-ftparchive packages "$bindir" | tee "$bindir/Packages"
    gzip -fk "$bindir/Packages"
    bzip2 -fk "$bindir/Packages"

    delete_old_pkgs "$bindir"
done

log "Creating release file..."
apt-ftparchive release \
    -o APT::FTPArchive::Release::Origin="k6" \
    -o APT::FTPArchive::Release::Label="k6" \
    -o APT::FTPArchive::Release::Suite="stable" \
    -o APT::FTPArchive::Release::Codename="stable" \
    -o APT::FTPArchive::Release::Architectures="$architectures" \
    -o APT::FTPArchive::Release::Components="main" \
    -o APT::FTPArchive::Release::Date="$(date -Ru)" \
    "dists/stable" > "dists/stable/Release"

# Sign release file
gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
    --pinentry-mode=loopback --yes --detach-sign --armor \
    -o "dists/stable/Release.gpg" "dists/stable/Release"
gpg2 --default-key="$PGPKEYID" --passphrase="$PGP_SIGN_KEY_PASSPHRASE" \
    --pinentry-mode=loopback --yes --clear-sign \
    -o "dists/stable/InRelease" "dists/stable/Release"

log "Generating index.html ..."
generate_index.py -r

popd > /dev/null

sync_to_s3
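For context, this is roughly how the script would be invoked. The script file name and local directory names below are hypothetical (the diff as shown here does not include the file paths), and the AWS and PGP variables documented at the top of the script must already be set:

    # Hypothetical invocation; create-deb-repo.sh, dist/ and deb-repo/ are placeholder names.
    export AWS_ACCESS_KEY_ID=... AWS_SECRET_ACCESS_KEY=...
    export PGPKEYID=... PGP_SIGN_KEY_PASSPHRASE=...
    ./create-deb-repo.sh dist deb-repo                  # default dl.k6.io bucket
    ./create-deb-repo.sh dist deb-repo staging-bucket   # or an explicit bucket

Assuming the bucket is then served at https://dl.k6.io, the dists/stable/main layout and the Suite/Components values passed to apt-ftparchive release correspond to an apt source line along the lines of "deb https://dl.k6.io/deb stable main".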
@@ -0,0 +1,66 @@ (new file: Bash script that publishes the MSI packages to S3)

#!/bin/bash
set -eEuo pipefail

# External dependencies:
# - https://aws.amazon.com/cli/
#   awscli expects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set in the
#   environment.
# - generate_index.py
#   For generating the index.html of each directory. It's available in the
#   packaging/bin directory of the k6 repo, and should be in $PATH.

_s3bucket="${S3_BUCKET-dl.k6.io}"
_usage="Usage: $0 <pkgdir> <repodir> [s3bucket=${_s3bucket}]"
PKGDIR="${1?${_usage}}"  # The directory where .msi files are located
REPODIR="${2?${_usage}}" # The package repository working directory
S3PATH="${3-${_s3bucket}}/msi"
# Remove packages older than N number of days (730 is roughly ~2 years).
REMOVE_PKG_DAYS=730

log() {
    echo "$(date -Iseconds) $*"
}

delete_old_pkgs() {
    find "$1" -name '*.msi' -type f -daystart -mtime "+${REMOVE_PKG_DAYS}" -print0 | xargs -r0 rm -v
}

sync_to_s3() {
    log "Syncing to S3 ..."
    aws s3 sync --no-progress --delete "${REPODIR}/" "s3://${S3PATH}/"

    # Set a short cache expiration for index files and the latest MSI package.
    aws s3 cp --no-progress --recursive --exclude='*' \
        --include='*.html' \
        --cache-control='max-age=60,must-revalidate' \
        --content-type='text/html' \
        --metadata-directive=REPLACE \
        "s3://${S3PATH}" "s3://${S3PATH}"
    aws s3 cp --no-progress \
        --cache-control='max-age=60,must-revalidate' \
        --content-type='application/x-msi' \
        --metadata-directive=REPLACE \
        "s3://${S3PATH}/k6-latest-amd64.msi" "s3://${S3PATH}/k6-latest-amd64.msi"
}

mkdir -p "$REPODIR"

# Download existing packages
# For MSI packages this is only done to be able to generate the index.html correctly.
# Should we fake it and create empty files that have the same timestamp and size as the original ones?

    Review discussion on the comment above:
    - Nah, the cost of doing this once every 2 months or so is negligible, especially when you're deleting old packages... Though, if we ever figure it out, we might save some bandwidth by completely not hosting the .msi packages on ...
    - hmmm, I do wonder if we should not actually drop the whole msi hosting and truly just use GitHub?
    - We'd still need a way to redirect people to the latest msi release (and maybe to the other plain zipped binaries we have), which, judging by the "CloudFront caches redirects aggressively and I wasn't able to invalidate it" comment below, probably won't be easy to do if we just host redirects to them on dl.k6.io... The current solution is good enough, and has a nice side benefit of having a folder with the old installations at hand, so we should leave it be, I think...
    - Yeah, this would be good, though instead of dropping it entirely I'd prefer to have S3 redirects to GH like Ned mentioned, so that we can point users to a central location for all packages. The caching issue wouldn't be a problem for links to specific versions, as they could remain static. But the link to the latest version would be an issue, as it needs to be updated, though we could work around it if we started publishing an MSI without a version in its filename (e.g. ...). Anyways, let's leave it as is for now and consider doing this later. It should be transparent to users if done correctly. 😅

aws s3 sync --no-progress --exclude='*' --include='*.msi' "s3://${S3PATH}/" "$REPODIR/"

# Copy the new packages in
find "$PKGDIR" -name "*.msi" -type f -print0 | xargs -r0 cp -t "$REPODIR"

delete_old_pkgs "$REPODIR"

# Update the latest package. This could be done with S3 redirects, but
# CloudFront caches redirects aggressively and I wasn't able to invalidate it.
latest="$(find "$REPODIR" -name '*.msi' -printf '%P\n' | sort | tail -1)"
cp -p "${REPODIR}/${latest}" "${REPODIR}/k6-latest-amd64.msi"

log "Generating index.html ..."
(cd "$REPODIR" && generate_index.py -r)

sync_to_s3
Review discussion on the aws s3 cp metadata handling in sync_to_s3:

- What does "Note that if the object is copied over in parts," from the documentation of this option mean? I can't find an explanation, so I wonder whether this is just something that aws s3 cp won't do on its own, or whether it has a flag that is just badly documented and my searching for it fails.
- Yeah, the documentation is a bit confusing... I think this is mostly relevant when doing multipart uploads, but since we're copying from the same bucket and just replacing the metadata, I don't think it matters. I did have to specify content-type manually because otherwise everything gets application/octet-stream :-/ This was more intuitive with s3cmd, which had a modify command, so hopefully this copying over doesn't incur additional costs.
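One way to check what that copy actually produced (assuming read access to the bucket and the default dl.k6.io name) is to inspect an uploaded object's metadata with aws s3api head-object, which reports the ContentType and CacheControl that will be served:

    # Hypothetical check; the bucket and key follow the defaults used in these scripts.
    aws s3api head-object --bucket dl.k6.io --key msi/k6-latest-amd64.msi
    # After sync_to_s3 has run, the response should show
    # "ContentType": "application/x-msi" and "CacheControl": "max-age=60,must-revalidate".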