arm64 support #90

Merged Feb 16, 2021 (9 commits; changes from 8 commits shown)
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,3 +1,5 @@
build
.kvm-images
.installed-requirements
.installed-qemu
namibase/nami-linux-x64.tar.gz
110 changes: 91 additions & 19 deletions .travis.yml
@@ -1,22 +1,94 @@
language: bash
sudo: required
script: bash shellcheck && sudo bash buildall
dist: xenial
dist: focal
virt: vm
group: edge
os: linux
services:
- docker
before_install:
- docker version
# Fix for Ubuntu Xenial apt-daily.service triggering
# https://unix.stackexchange.com/questions/315502/how-to-disable-apt-daily-service-on-ubuntu-cloud-vm-image
- |
while sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock >/dev/null 2>&1; do
sleep 1
done
- sudo apt-get -qq update
- sudo apt-get install -y debian-archive-keyring debootstrap shellcheck
deploy:
provider: script
script: bash pushall
skip_cleanup: true
on:
branch: master

env:
global:
- BASENAME=bitnami/minideb
- LATEST=buster
- DISTS_WITH_SNAPSHOT="$LATEST"

.build_job: &build_job
stage: build
before_install:
- docker version
# Fix for Ubuntu Xenial apt-daily.service triggering
# https://unix.stackexchange.com/questions/315502/how-to-disable-apt-daily-service-on-ubuntu-cloud-vm-image
- |
while sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock >/dev/null 2>&1; do
sleep 1
done
- sudo rm -f /usr/local/bin/jq
@juamedgod (Feb 9, 2021)

Why is this needed? (deleting jq)

Contributor Author

The .installed-requirements make target fails without removing the Travis-included jq executable.

Uhm, which error was it throwing? If the package was already installed it should simply ignore it.

@alekitto (Contributor Author, Feb 9, 2021)

I don't remember the exact error message, but the jq executable is not installed on the Travis machine via apt, and apt simply refuses to overwrite an existing file.

install:
- sudo make .installed-requirements
script:
- sudo bash buildone $DIST $PLATFORM
- 'if [[ "$TRAVIS_BRANCH" == "master" && "$DISTS_WITH_SNAPSHOT" =~ (^|[[:space:]])"$DIST"($|[[:space:]]) ]] ; then sudo bash buildone_snapshot $DIST "$(./snapshot_id)" $PLATFORM ; fi'
after_success:
- 'if [[ "$TRAVIS_BRANCH" == "master" && "$LATEST" == "$DIST" ]] ; then sudo docker tag "$BASENAME:$DIST-$PLATFORM" "$BASENAME:latest-$PLATFORM" ; fi'
- 'if [[ "$TRAVIS_BRANCH" == "master" ]] ; then sudo bash pushone $DIST $PLATFORM ; fi'
- 'if [[ "$TRAVIS_BRANCH" == "master" && "$DISTS_WITH_SNAPSHOT" =~ (^|[[:space:]])"$DIST"($|[[:space:]]) ]] ; then sudo bash pushone "$DIST-snapshot-$(./snapshot_id)" $PLATFORM ; fi'
- 'if [[ "$TRAVIS_BRANCH" == "master" && "$LATEST" == "$DIST" ]] ; then sudo bash pushone latest $PLATFORM ; fi'

jobs:
include:
- stage: shellcheck
install:
- sudo apt-get -qq update
- sudo apt-get install -y shellcheck
script: bash shellcheck
- <<: *build_job
arch: amd64
env:
- DIST=jessie PLATFORM=amd64
- <<: *build_job
arch: amd64
env:
- DIST=stretch PLATFORM=amd64
- <<: *build_job
arch: amd64
env:
- DIST=buster PLATFORM=amd64
- <<: *build_job
arch: arm64-graviton2
env:
- DIST=stretch PLATFORM=arm64
Contributor

Unfortunately, I think we need to use arm64, since arm64-graviton2 is only supported on .com and our project is built on .org.

Contributor Author

Didn't notice that the build was running on .org. Fixed.

Contributor Author

Anyway, .org now shows a warning stating that it will be shut down in a few weeks. If the build is migrated to .com, maybe we should revert to graviton2, which should have better performance.

Contributor Author

There could be a problem with the arm64 architecture on travis-ci.org: it can run only in an unprivileged LXD container.
To correctly run debootstrap a full VM is needed, which is only available with arm64-graviton2.
I also tried to run a QEMU build inside a full amd64 VM, but it is extremely slow.

Given that .org is about to shut down, maybe a migration to .com could be planned?

Contributor Author

The third option is to try to build the images on GitHub Actions.
It shouldn't be too difficult to make a test workflow. The question is: will the execution be too slow?

@ddelange (Jan 26, 2021)

> The question is: will the execution be too slow?

Where Travis runs stages sequentially on the same machine (multiple machines also doable), GitHub runs jobs in parallel on multiple machines by default. This should probably solve any speed issues.

Docker provides a QEMU setup action for multi-arch GH workflows.

If I'm not mistaken, GH by default runs up to four jobs on four runners (each runner 2 CPU / 7 GB) concurrently (not sure about the exact number anymore), which can be decreased as well. All you'd need to do is define a job with a step that runs the qemu_build script with the args ${{ matrix.distro }} ${{ matrix.other_arg }}, then define your custom matrix at the top of the job, and you should be all set for parallel builds :) (see the sketch after the snippet below)

For example, this CI runs in parallel on new commits to master and on all commits to all PRs (and emulates Travis's auto-cancel-previous-runs feature), plus CD runs once upon a GitHub Release or Prerelease. As an alternative to split files, add a push step to the job that only runs on a release/prerelease event:

on:
  pull_request:
  push:
    branches: master
  release:
    types: [released, prereleased]
  workflow_dispatch:  # allows triggering manually from the Actions tab

... 

  - name: Push image
    if: github.event_name == 'release'
    run: docker push
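
A rough sketch of the matrix approach described above, assuming a hypothetical build job (the runner image, dist/arch values, and step names are illustrative, not the workflow actually used by this repo):

```yaml
# Hypothetical matrix job: one runner per (dist, arch) combination.
jobs:
  build:
    runs-on: ubuntu-20.04
    strategy:
      matrix:
        dist: [stretch, buster]
        arch: [amd64, arm64]
    steps:
      - uses: actions/checkout@v2
      - name: Build image
        run: ./qemu_build ${{ matrix.dist }} ${{ matrix.arch }}
```

Each combination then gets its own runner, so adding a distribution or architecture only means extending the matrix.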

Contributor Author

For the record: I just tried to build the images on GH Actions on my fork (on the master branch, to test the push job), without using qemu_build, as its performance was too bad.
This is the last working run.

14 mins for buster arm64...

+1 for the effort though! Please don't delete the branch :)

@ddelange (Jan 28, 2021)

FYI, I bumped the build job to ubuntu-20.04 in the hope that it would run on newer GitHub runners than the 18.04 job.

The result is about twice as fast (similar stage timings compared to graviton2, 555918d).

- <<: *build_job
arch: arm64-graviton2
env:
- DIST=buster PLATFORM=arm64
- stage: deploy
if: branch = master AND type = push
env:
- DISTS="stretch buster latest"

As jessie does not get a multiplatform manifest and we are tagging it as jessie-amd64, we should add some extra logic to tag it as simply jessie too, for backwards compatibility.

Contributor Author

Right, I've added a build job for the jessie image push.
There's probably something to be corrected in update_minideb_derived to check multiple digests, but that is out of scope for this PR.
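
A minimal sketch of the kind of backwards-compatibility tagging being discussed, assuming the same $BASENAME variable as in the Travis config (illustrative only, not necessarily the exact job added in this PR):

```bash
# jessie is built for amd64 only, so re-tag the single-arch image under the plain name and push it.
docker tag "$BASENAME:jessie-amd64" "$BASENAME:jessie"
docker push "$BASENAME:jessie"
```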

before_install: mkdir $HOME/.docker
install: 'echo "{ \"experimental\": \"enabled\" }" > $HOME/.docker/config.json'
script:
- |
if [ -n "${DOCKER_PASSWORD:-}" ]; then
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
fi

# Create and merge a PR to update minideb-extras
CIRCLE_CI_FUNCTIONS_URL=${CIRCLE_CI_FUNCTIONS_URL:-https://raw.githubusercontent.com/bitnami/test-infra/master/circle/functions}
source <(curl -sSL "$CIRCLE_CI_FUNCTIONS_URL")
for DIST in $DISTS ; do
sudo docker manifest create $BASENAME:$DIST $BASENAME:$DIST-amd64 $BASENAME:$DIST-arm64
sudo docker manifest push $BASENAME:$DIST
sudo docker pull $BASENAME:$DIST

if [[ "$DISTS_WITH_SNAPSHOT" =~ (^|[[:space:]])"$DIST"($|[[:space:]]) ]] ; then
SNAPSHOT_NAME="$DIST-snapshot-$(./snapshot_id)"
sudo docker manifest create $SNAPSHOT_NAME:$DIST $SNAPSHOT_NAME:$DIST-amd64 $SNAPSHOT_NAME:$DIST-arm64

I think you meant $BASENAME:$SNAPSHOT_NAME (and the -amd64, -arm64 variants)

Contributor Author

Fixed

sudo docker manifest push $SNAPSHOT_NAME:$DIST
sudo docker pull $SNAPSHOT_NAME:$DIST
fi

# Use '.RepoDigests 0' for getting Dockerhub repo digest as it was the first pushed
DIST_REPO_DIGEST=$(docker image inspect --format '{{index .RepoDigests 0}}' "$BASENAME:${DIST}")
update_minideb_derived "https://github.com/$BASENAME-runtimes" "$DIST" "$DIST_REPO_DIGEST"
done
5 changes: 5 additions & 0 deletions Makefile
@@ -12,6 +12,11 @@ clean:
clobber: clean
@${RM} .installed-requirements

.installed-qemu:
@echo "Installing QEMU and required packages..."
@./install-qemu.sh
@touch $@

.installed-requirements:
@echo "Installing required packages..."
@./pre-build.sh
20 changes: 19 additions & 1 deletion README.md
@@ -53,7 +53,7 @@ We provide a Makefile to help you build Minideb locally. It should be run on a D
$ sudo make
```

To build an individual release (stretch, jessie or unstable)
To build an individual release (stretch, buster or unstable)
```
$ sudo make stretch
```
@@ -63,6 +63,24 @@ To test the resulting image:
$ sudo make test-stretch
```

## Building Minideb for a foreign architecture
The make commands shown above build an image for the architecture you are currently working on.
To build an image for a foreign architecture (for example, to build a multi-arch image), we provide a
simple script which runs a QEMU instance for the target architecture and builds the image inside it.

To build and test a buster image for arm64:
```
$ ./qemu_build buster arm64
```

The image will then be imported locally through the Docker CLI with a `$distribution-$architecture` tag
(example: `bitnami/minideb:buster-arm64`).

Current limitations of the `qemu_build` script:

- Can be run only on Debian-based distributions
- Supports `AMD64` and `ARM64` target architectures only
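
A quick way to confirm the architecture recorded in an imported image is to inspect it with the Docker CLI (a sanity check only, not part of the build scripts):

```
$ docker image inspect --format '{{.Architecture}}' bitnami/minideb:buster-arm64
```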

# Contributing
We'd love for you to contribute to this image. You can request new features by creating an [issue](https://github.com/bitnami/minideb/issues), or submit a [pull request](https://github.com/bitnami/minideb/pulls) with your contribution.

20 changes: 15 additions & 5 deletions buildall
@@ -6,22 +6,32 @@
set -u
set -o pipefail

arch=${1:-"amd64 arm64"}

dist="jessie
stretch
buster
"
dist_with_snapshot="buster"

for i in $dist; do
./buildone "$i"
for a in $arch; do
for i in $dist; do
if [[ "$a" != "amd64" && "$i" == "jessie" ]]; then
continue
fi

./buildone "$i" "$a"
done
done

snapshot_id=$(./snapshot_id)
if [ -n "$snapshot_id" ]; then
for a in $arch; do
for i in $dist_with_snapshot; do
./buildone_snapshot "$i" "$snapshot_id"
./buildone_snapshot "$i" "$snapshot_id" "$a"
done

mkdir -p build
echo "$snapshot_id" > build/snapshot_id
mkdir -p "build/$a"
echo "$snapshot_id" > "build/$a/snapshot_id"
done
fi
15 changes: 8 additions & 7 deletions buildone
@@ -43,8 +43,9 @@ log() {

build() {
DIST=$1
PLATFORM=${2:-amd64}

I believe we should consider the $PLATFORM name when querying the registry in:

if docker pull "$BASENAME:$TAG" > /dev/null; then
    ...
fi

If not, when comparing the amd64 image (at least now that we only have an amd64 one, but it is not labeled -amd64), the check at the end is probably going to determine the image is up to date:

    else
        log "Image didn't change"
        return
    fi

And so it will skip tagging it:

docker tag "$built_image_id" "$BASENAME:$TAG-$PLATFORM"

This may not be a problem when all is up and running, but at least for the first time it will fail, because the tag does not exist when creating the manifest.

Maybe just appending it at the beginning would work:

if [ -n "$debian_snapshot_id" ]; then
    TAG="${DIST}-snapshot-${debian_snapshot_id}-$PLATFORM"
else
    TAG=$DIST-$PLATFORM
fi

That would also require tweaking the "test" script to check that the image name corresponds to jessie. From:

if [ "jessie" == "$DIST" ]; then

to something like

if [[ "$DIST" == "jessie"* ]]; then

Contributor Author

Fixed


debian_snapshot_id=${2:-}
debian_snapshot_id=${3:-}
if [ -n "$debian_snapshot_id" ]; then
TAG="${DIST}-snapshot-${debian_snapshot_id}"
else
@@ -64,7 +65,7 @@ build() {
log "Building $BASENAME:$TAG"
log "============================================"
./mkimage "build/$TAG.tar" "$DIST" "${debian_snapshot_id:-}"
built_image_id=$(./import "build/$TAG.tar" "$target_ts")
built_image_id=$(./import "build/$TAG.tar" "$target_ts" "$PLATFORM")
log "============================================"
log "Running tests for $BASENAME:$TAG"
log "============================================"
@@ -73,7 +74,7 @@ build() {
log "Rebuilding $BASENAME:$TAG to test reproducibility"
log "============================================"
./mkimage "build/${TAG}-repro.tar" "$DIST" "${debian_snapshot_id:-}"
repro_image_id=$(./import "build/${TAG}-repro.tar" "$target_ts")
repro_image_id=$(./import "build/${TAG}-repro.tar" "$target_ts" "$PLATFORM")
if [ "$repro_image_id" != "$built_image_id" ]; then
log "$BASENAME:$TAG differs after a rebuild. Examine $built_image_id and $repro_image_id"
log "to find the differences and fix the build to be reproducible again."
@@ -89,19 +90,19 @@ build() {
./dockerdiff "$pulled_image_id" "$built_image_id" || true
# Re-import with the current timestamp so that the image shows
# as new
built_image_id="$(./import "build/$TAG.tar" "$current_ts")"
built_image_id="$(./import "build/$TAG.tar" "$current_ts" "$PLATFORM")"
else
log "Image didn't change"
return
fi
fi
docker tag "$built_image_id" "$BASENAME:$TAG"
log "Tagged $built_image_id as $BASENAME:$TAG"
docker tag "$built_image_id" "$BASENAME:$TAG-$PLATFORM"
log "Tagged $built_image_id as $BASENAME:$TAG-$PLATFORM"
Comment on lines +99 to +100

Contributor

Question: is there some special handling in registries that means this is understood to be a multi-arch image? If not, it seems like this means we are no longer going to update the existing tags?

Contributor Author

The existing tags will be updated by the docker manifest push command as a multi-arch tag, but the source images for the tag need to be pushed before the docker manifest command can be issued.
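
A minimal sketch of that flow, using buster as an example (the tags follow the deploy stage in .travis.yml above):

```bash
# The per-architecture images must exist in the registry before the manifest can reference them.
docker push bitnami/minideb:buster-amd64
docker push bitnami/minideb:buster-arm64
# The existing "buster" tag is then updated to point at a multi-arch manifest list.
docker manifest create bitnami/minideb:buster \
    bitnami/minideb:buster-amd64 bitnami/minideb:buster-arm64
docker manifest push bitnami/minideb:buster
```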

Contributor

I see, thanks.

}

if [ -z "$1" ]; then
echo "You must specify the dist to build"
exit 1
fi

build "${1}" "${2:-}"
build "$@"
3 changes: 2 additions & 1 deletion buildone_snapshot
@@ -6,5 +6,6 @@

dist=${1:?dist arg is required}
snapshot_id=${2:-$(./snapshot_id)}
platform=${3:-amd64}

./buildone "$dist" "$snapshot_id"
./buildone "$dist" "$platform" "$snapshot_id"
9 changes: 5 additions & 4 deletions import
@@ -12,19 +12,20 @@
set -u
set -o pipefail

CONF_TEMPLATE='{"architecture":"amd64","comment":"from Bitnami with love","config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/bash"],"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"container_config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"created":"%TIMESTAMP%","docker_version":"1.13.0","history":[{"created":"%TIMESTAMP%","comment":"from Bitnami with love"}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:%LAYERSUM%"]}}'
MANIFEST_TEMPLATE='[{"Config":"%CONF_SHA%.json","RepoTags":null,"Layers":["%LAYERSUM%/layer.tar"]}]'

SOURCE=${1:?Specify the tarball to import}
TIMESTAMP=${2:?Specify the timestamp to use}
PLATFORM=${3:?Specify the target platform}

CONF_TEMPLATE='{"architecture":"%PLATFORM%","comment":"from Bitnami with love","config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":["/bin/bash"],"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"container_config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"created":"%TIMESTAMP%","docker_version":"1.13.0","history":[{"created":"%TIMESTAMP%","comment":"from Bitnami with love"}],"os":"linux","rootfs":{"type":"layers","diff_ids":["sha256:%LAYERSUM%"]}}'
MANIFEST_TEMPLATE='[{"Config":"%CONF_SHA%.json","RepoTags":null,"Layers":["%LAYERSUM%/layer.tar"]}]'

import() {
local TDIR="$(mktemp -d)"
local LAYERSUM="$(sha256sum $SOURCE | awk '{print $1}')"
mkdir $TDIR/$LAYERSUM
cp $SOURCE $TDIR/$LAYERSUM/layer.tar
echo -n '1.0' > $TDIR/$LAYERSUM/VERSION
local CONF="$(echo -n "$CONF_TEMPLATE" | sed -e "s/%TIMESTAMP%/$TIMESTAMP/g" -e "s/%LAYERSUM%/$LAYERSUM/g")"
local CONF="$(echo -n "$CONF_TEMPLATE" | sed -e "s/%PLATFORM%/$PLATFORM/g" -e "s/%TIMESTAMP%/$TIMESTAMP/g" -e "s/%LAYERSUM%/$LAYERSUM/g")"
local CONF_SHA="$(echo -n "$CONF" | sha256sum | awk '{print $1}')"
echo -n "$CONF" > "$TDIR/${CONF_SHA}.json"
local MANIFEST="$(echo -n "$MANIFEST_TEMPLATE" | sed -e "s/%CONF_SHA%/$CONF_SHA/g" -e "s/%LAYERSUM%/$LAYERSUM/g")"
19 changes: 19 additions & 0 deletions install-qemu.sh
@@ -0,0 +1,19 @@
#!/bin/bash

set -eu

do_sudo() {
if [[ "0" == "$(id --user)" ]]; then
"$@"
else
sudo "$@"
fi
}

while do_sudo fuser /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock >/dev/null 2>&1; do
sleep 1
done

do_sudo apt-get update
do_sudo apt-get install -y qemu-kvm libvirt-bin qemu-utils genisoimage virtinst curl rsync qemu-system-x86 qemu-system-arm cloud-image-utils

46 changes: 46 additions & 0 deletions pushone
@@ -0,0 +1,46 @@
#!/bin/bash

set -e
set -u
set -o pipefail

DIST=${1:?Specify the distribution name}
PLATFORM=${2:-amd64}

BASENAME=bitnami/minideb
GCR_BASENAME=gcr.io/bitnami-containers/minideb
QUAY_BASENAME=quay.io/bitnami/minideb

if [ -n "${DOCKER_PASSWORD:-}" ]; then
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
fi

if [ -n "${QUAY_PASSWORD:-}" ]; then
docker login -u "$QUAY_USERNAME" -p "$QUAY_PASSWORD" quay.io
fi

if [ -n "${GCR_KEY:-}" ]; then
gcloud auth activate-service-account "$GCR_EMAIL" --key-file <(echo "$GCR_KEY")
fi

ENABLE_DOCKER_CONTENT_TRUST=0
if [ -n "${DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE:-}" ] && [ -n "${DOCKER_CONTENT_TRUST_REPOSITORY_KEY:-}" ]; then
tmpdir=$(mktemp -d)
(cd "${tmpdir}" && bash -c 'echo -n "${DOCKER_CONTENT_TRUST_REPOSITORY_KEY}" | base64 -d > key')
chmod 400 "${tmpdir}/key"
docker trust key load "${tmpdir}/key"
rm -rf "${tmpdir}"
export ENABLE_DOCKER_CONTENT_TRUST=1
fi

push() {
local dist="$1"
DOCKER_CONTENT_TRUST=${ENABLE_DOCKER_CONTENT_TRUST} docker push "${BASENAME}:${dist}"
docker push "${QUAY_BASENAME}:${dist}"
gcloud docker -- push "${GCR_BASENAME}:${dist}"
}

docker tag "${BASENAME}:${DIST}" "${QUAY_BASENAME}:${DIST}-${PLATFORM}"
docker tag "${BASENAME}:${DIST}" "${GCR_BASENAME}:${DIST}-${PLATFORM}"
push "$DIST-${PLATFORM}"
