
[WIP]: Jenkinsfile: add Testing stage #31

Closed
wants to merge 1 commit

Conversation

arithx (Contributor) commented Nov 28, 2018

Adds a testing stage which runs all kola tests for the latest build.
Currently does not archive any testing artifacts.

arithx (Contributor, Author) commented Nov 28, 2018

@dustymabe can you sync up with me on how to test this? I've run the commands themselves locally inside a coreos-assembler container, but I want to validate it against the actual pipeline.

dustymabe (Member) commented:

> @dustymabe can you sync up with me on how to test this? I've run the commands themselves locally inside a coreos-assembler container, but I want to validate it against the actual pipeline.

@arithx one way might be to run the environment locally as described in https://github.com/coreos/fedora-coreos-pipeline/blob/a8bc0efdf1c6eba0bd75a03d434c74f473536f52/HACKING.md and see if it works there. We can also set up a container in CentOS CI, run it, and see if it works. We should also look to get you access to that cluster.

Jenkinsfile Outdated
@@ -47,6 +47,14 @@ podTemplate(cloud: 'openshift', label: 'coreos-assembler', yaml: pod, defaultCon
currentBuild.description = "⚡ ${newBuildID}"
}

stage('Test') {
utils.shwrap("""
latest_build=$(readlink builds/latest)
Review comment (Member):

Need to escape all the $ in this hunk.
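A sketch of what the reviewer is asking for, assuming `utils.shwrap` passes the string body to a shell: inside a Groovy `"""…"""` string, a bare `$` starts GString interpolation, so every shell-side substitution has to be written as `\$` to reach the shell intact. The hunk would then look something like:

```groovy
// Sketch only: escaping shell substitutions inside a Groovy GString.
// Unescaped, Groovy would try to interpolate $(readlink ...) itself
// at pipeline-compile time instead of letting the shell expand it.
stage('Test') {
    utils.shwrap("""
    latest_build=\$(readlink builds/latest)
    """)
}
```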

jlebon (Member) commented Nov 28, 2018

Yeah, HACKING.md should be pretty good now! It does involve some steps to get oc cluster up working, but that's ideally a one-time thing :)

jlebon (Member) commented Nov 28, 2018

(Also, thanks for this patch! 👍 )

@arithx arithx force-pushed the add_kola branch 2 times, most recently from a3cc11e to 9429fb0 Compare November 30, 2018 20:26
@dustymabe dustymabe changed the title Jenkinsfile: add Testing stage [WIP]: Jenkinsfile: add Testing stage Dec 10, 2018
dustymabe (Member) commented:

marking as WIP

jlebon (Member) commented Dec 14, 2018

This is now blocked on a workaround for coreos/mantle#956.

Adds a testing stage which will clone down a fork of mantle, build
kola/kolet, and test via the unprivileged-qemu platform. Currently does
not archive any testing artifacts.
arithx (Contributor, Author) commented Jan 31, 2019

Updated to use the fcos_ci branch of mantle.

Untested.

jlebon (Member) commented Mar 5, 2019

I guess if one wanted to test this today, they'd use the qemu_2_electric_boogaloo branch right?

utils.shwrap("""
latest_build=\$(readlink builds/latest)
qcow=\$(ls builds/"\${latest_build}"/*-"\${latest_build}"-qemu.qcow2)
mantle/bin/kola -p unprivileged-qemu --qemu-image "\${qcow}" -b fcos run | tee
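The lookup logic in this hunk can be exercised standalone against a throwaway `builds/` tree; a minimal sketch, where the build ID and image filename are hypothetical stand-ins for what the pipeline would produce:

```shell
# Sketch: fake the builds/ layout the hunk expects, then resolve the qcow2.
# The build ID and image filename below are made up for illustration.
set -euo pipefail
cd "$(mktemp -d)"
mkdir -p builds/31.20190000.0
touch builds/31.20190000.0/fedora-coreos-31.20190000.0-qemu.qcow2
ln -sfn 31.20190000.0 builds/latest

latest_build=$(readlink builds/latest)
qcow=$(ls builds/"${latest_build}"/*-"${latest_build}"-qemu.qcow2)
echo "${qcow}"
# → builds/31.20190000.0/fedora-coreos-31.20190000.0-qemu.qcow2
```

Note that the quoting here is the shell-side form; inside the Groovy `"""…"""` string each `$` still needs a leading backslash.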
Review comment (Member):
IMO something like this should be coreos-assembler kola.

stage('Test') {
// clone & build kola
utils.shwrap("""
git clone https://github.com/arithx/mantle
Review comment (Member):
Yeah...we need to make this more convenient in cosa.

jlebon (Member) commented May 29, 2019

So, coreos/coreos-assembler#85 has now been merged. Rebasing on top of it would be a great opportunity for someone who wants to get familiar with the pipeline and hacking on it!

At least as a first step, just running qemu-unpriv as a sanity check, even if we don't act on it. And then as a follow-up we can do the "skip remaining artifacts and archive qcow2" bits.

@jlebon jlebon added the WIP Work in progress label Jun 10, 2019
jlebon added a commit to jlebon/fedora-coreos-pipeline that referenced this pull request Jun 17, 2019
I think we should stop using `/srv` as a workdir entirely and just
always build in the workspace. The core issue here is that (1) we want
to be able to have concurrent builds, and (2) a workdir can't be easily
shared today. This also greatly simplifies the devel vs prod logic, which
had some funky conditionals around this.

So then, how can developers without S3 creds actually *access* built
artifacts? We simply archive them as part of the build. This is in line
also with coreos#31, where we'll probably be archiving things anyway.

Finally, how *can* we use the PVC as cache in a safe way shareable
across all the streams? I see two options offhand:
1. as a local RPM mirror: add flags to `cosa fetch` (and maybe
   `rpm-ostree`) to read & write RPMs in `/srv`, hold a lock to regen
   metadata
2. as a pkgcache repo: similarly to the above, but also doing the
   import, so it's just a pkgcache repo; this would probably require
   teaching rpm-ostree about this, or `cosa fetch` could just blindly
   import every ref
jlebon added further commits to jlebon/fedora-coreos-pipeline referencing this pull request on Jun 17–18, 2019, each carrying a revision of the same commit message.
@arithx arithx closed this Jul 26, 2019
Labels: WIP (Work in progress)

4 participants