pipeline: stop using /srv as workdir
I think we should stop using `/srv` as a workdir entirely and just
always build in the workspace. The core issue here is that (1) we want
to be able to have concurrent builds, and (2) a cosa workdir can't be
easily shared today. This also simplifies the devel vs prod logic quite
a bit, since it previously needed some funky conditionals around this.

So then, how can developers without S3 creds actually *access* built
artifacts? We simply archive them as part of the build. This is also in
line with coreos#31, where we'll probably be archiving things anyway.
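The devel-mode archive step this introduces amounts to roughly the
following sketch (`PVC_ROOT` and its temp-dir default are illustrative
knobs for running this standalone; the pipeline itself writes under
`/srv`):

```shell
#!/bin/bash
# Sketch: archive the latest build into the PVC so developers without
# S3 credentials can still retrieve artifacts; keep only one build.
set -euo pipefail

pvc="${PVC_ROOT:-$(mktemp -d)}"          # the pipeline would use /srv
devel_prefix="${devel_prefix:-example}"  # hypothetical pipeline prefix
dest="$pvc/devel/$devel_prefix/build"

mkdir -p builds/latest                   # stand-in for a real cosa build
rm -rf "$dest"                           # drop the previous archive
mkdir -p "$(dirname "$dest")"
cp -a "$(realpath builds/latest)" "$dest"
echo "archived to $dest"
```

Since `builds/latest` is a symlink in a real cosa workdir, the
`realpath` ensures the actual build directory gets copied, not the link.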

Finally, how *can* we use the PVC as cache in a safe way shareable
across all the streams? I see two options offhand:
1. as a local RPM mirror: add flags and logic to `cosa fetch` to read and
   write RPMs in `/srv`, holding a lock while regenerating metadata
2. as a pkgcache repo: similar to the above, but also doing the import,
   so `/srv` is just a pkgcache repo; this would probably require
   teaching rpm-ostree about it, or `cosa fetch` could just blindly
   import every ref
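Option 1 could look something like the sketch below. Everything here is
hypothetical: no such `cosa fetch` flag exists yet, and the directory
layout is made up; only the locking pattern is the point.

```shell
#!/bin/bash
# Sketch of option 1: use a directory on the PVC as a shared local RPM
# mirror, serializing metadata regeneration behind flock(1) so that
# concurrent stream builds can't corrupt the repo.
set -euo pipefail

mirror="${MIRROR_DIR:-$(mktemp -d)}"   # the pipeline would use /srv
mkdir -p "$mirror"
(
    flock 9                            # one metadata writer at a time
    # a future `cosa fetch` flag could download new RPMs here and then
    # regenerate repodata (e.g. `createrepo_c --update .`); we just
    # touch a stamp file as a stand-in for the real regeneration
    touch "$mirror/repodata.stamp"
) 9>"$mirror/.lock"
echo "mirror at $mirror"
```

Readers that only consume existing repodata wouldn't need the lock;
only the fetch-and-regen path has to serialize.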
jlebon committed Jun 18, 2019
1 parent e50e678 commit fdedf82
Showing 3 changed files with 23 additions and 36 deletions.
2 changes: 1 addition & 1 deletion HACKING.md
@@ -238,7 +238,7 @@ This template creates:
1. the Jenkins master imagestream,
2. the Jenkins slave imagestream,
3. the coreos-assembler imagestream,
4. the `PersistentVolumeClaim` in which we'll cache and compose, and
4. the `PersistentVolumeClaim` in which we'll cache, and
5. the Jenkins pipeline build.

The default size of the PVC is 100Gi. There is a `PVC_SIZE`
52 changes: 22 additions & 30 deletions Jenkinsfile
@@ -55,15 +55,6 @@ if (prod) {
podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultContainer: 'jnlp') {
node('coreos-assembler') { container('coreos-assembler') {

// Only use the PVC for prod caching. For devel pipelines, we just
// always refetch from scratch: we don't want to allocate cached data
// for pipelines which may only run once.
if (prod) {
utils.workdir = "/srv"
} else {
utils.workdir = env.WORKSPACE
}

// this is defined IFF we *should* and we *can* upload to S3
def s3_builddir

@@ -81,28 +72,25 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
}
}

// Special case for devel pipelines not running in our project and not
// uploading to S3; in that case, the only way to make the builds
// accessible at all is to have them in the PVC.
if (!prod && !prod_jenkins && !s3_builddir) {
utils.workdir = "/srv"
}

stage('Init') {

def ref = params.STREAM
if (src_config_ref != "") {
assert !prod : "Asked to override ref in prod mode"
ref = src_config_ref
}

utils.shwrap("""
# just always restart from scratch in case it's a devel pipeline
# and it changed source url or ref; this info also makes it into
# the build metadata through cosa reading the origin remote
rm -rf src/config
def cache_img
if (prod) {
cache_img = "/srv/prod/${params.STREAM}/cache.qcow2"
} else {
cache_img = "/srv/devel/${devel_prefix}/cache.qcow2"
}

# in the future, the stream will dictate the branch in the prod path
utils.shwrap("""
coreos-assembler init --force --branch ${ref} ${src_config_url}
mkdir -p \$(dirname ${cache_img})
ln -s ${cache_img} cache/cache.qcow2
""")
}

@@ -164,17 +152,12 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
}

stage('Prune Cache') {
utils.shwrap("""
coreos-assembler prune --keep=1
""")

// If the cache img is larger than e.g. 8G, then nuke it. Otherwise
// it'll just keep growing and we'll hit ENOSPC.
// it'll just keep growing and we'll hit ENOSPC. Use realpath since
// the cache can actually be located on the PVC.
utils.shwrap("""
if [ \$(du cache/cache.qcow2 | cut -f1) -gt \$((1024*1024*8)) ]; then
rm -vf cache/cache.qcow2
qemu-img create -f qcow2 cache/cache.qcow2 10G
LIBGUESTFS_BACKEND=direct virt-format --filesystem=xfs -a cache/cache.qcow2
rm -vf \$(realpath cache/cache.qcow2)
fi
""")
}
@@ -192,6 +175,15 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
utils.shwrap("""
coreos-assembler buildupload s3 --acl=public-read ${s3_builddir}
""")
} else if (!prod) {
// In devel mode without an S3 server, just archive into the PVC
// itself. Otherwise there'd be no other way to retrieve the
// artifacts. But note we only keep one build at a time.
utils.shwrap("""
rm -rf /srv/devel/${devel_prefix}/build
mkdir -p /srv/devel/${devel_prefix}/build
cp -a \$(realpath builds/latest) /srv/devel/${devel_prefix}/build
""")
}

// XXX: For now, we keep uploading the latest build to the artifact
5 changes: 0 additions & 5 deletions utils.groovy
@@ -1,25 +1,20 @@
workdir = env.WORKSPACE

def shwrap(cmds) {
sh """
set -xeuo pipefail
cd ${workdir}
${cmds}
"""
}

def shwrap_capture(cmds) {
return sh(returnStdout: true, script: """
set -euo pipefail
cd ${workdir}
${cmds}
""").trim()
}

def shwrap_rc(cmds) {
return sh(returnStatus: true, script: """
set -euo pipefail
cd ${workdir}
${cmds}
""")
}
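With the `cd ${workdir}` lines gone, commands passed to these wrappers
simply run wherever the Jenkins `sh` step starts, i.e. the workspace.
The preamble they prepend is equivalent to this (the capture variants
omit `-x`; assuming bash, which `pipefail` implies):

```shell
#!/bin/bash
# Equivalent of the shwrap preamble: strict error handling, no cd.
# -x traces each command, -e aborts on the first failure, -u rejects
# unset variables, and pipefail makes a pipeline fail if any stage does.
set -xeuo pipefail
echo "running in $PWD"
```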
