Replace CircleCI/AppVeyor with GitHub Actions #1640

Merged: 7 commits merged into master from feat/ci-overhaul on Oct 12, 2020

Conversation

@imiric (Contributor) commented Sep 23, 2020

This replaces the CircleCI and AppVeyor setups with GitHub Actions.

It also creates a GitHub release when a version tag is pushed, using the matching .md file in the release notes directory. You can see how it looks in this test release.

Pending issues:

  • Unable to use the official golangci-lint action, because there's no way to specify the --new-from-rev commit we've been using so far to ignore the hundreds of past linter issues. The action exposes an only-new-issues option, but even with it enabled it was picking up a lot of issues.
    We should eventually fix all of these of course, but some cases are not easily fixable, so this might take time. A drawback of not using the official action is that we won't have nice inline errors in the PR, and instead we'll have to check the lint job's output, but this is not a huge problem in practice. (See the workflow sketch after this list for the manual approach.)

  • Test flakiness on macOS means we can't enable those builds by default.
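
A minimal sketch of the manual lint job described above (the GOLANGCI_VERSION pin and the baseline revision are illustrative assumptions, not the exact configuration in this PR):

lint:
  runs-on: ubuntu-latest
  env:
    GOLANGCI_VERSION: v1.31.0  # assumed pin; use whatever version the project tracks
  steps:
    - name: Checkout code
      uses: actions/checkout@v2
      with:
        fetch-depth: 0  # --new-from-rev needs more git history than a shallow clone
    - name: Install golangci-lint
      run: |
        cd "$(mktemp -d)"
        go get github.com/golangci/golangci-lint/cmd/golangci-lint@$GOLANGCI_VERSION
    - name: Run linters
      run: |
        export PATH="$HOME/go/bin:$PATH"
        # report only issues introduced after the chosen baseline revision
        golangci-lint run --new-from-rev=master ./...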

Partially addresses #959 and #1247; closes #1476?

@imiric requested a review from mstoykov September 23, 2020 10:52
@mstoykov (Contributor) left a comment

Why use fpm instead of the tools we were already using?
I would even argue that this part should stay the same in the GitHub Actions setup, at least for now - no need to change it, it has been working okay :D

As a whole I don't think we want anything other than just moving to GitHub Actions, without any new things - we can add those after that.

Also, can you re-enable the old CI so we can have both for ... a couple of weeks? ;)

Comment on lines +112 to +206
docker run $DOCKER_IMAGE_ID version
docker run $DOCKER_IMAGE_ID --help
docker run $DOCKER_IMAGE_ID help
docker run $DOCKER_IMAGE_ID run --help
docker run $DOCKER_IMAGE_ID inspect --help
docker run $DOCKER_IMAGE_ID status --help
docker run $DOCKER_IMAGE_ID stats --help
docker run $DOCKER_IMAGE_ID scale --help
docker run $DOCKER_IMAGE_ID pause --help
docker run $DOCKER_IMAGE_ID resume --help
Contributor:

Why do we do that?

Contributor Author:

I guess to test that all commands work? 🤷‍♂️ I don't mind it.

@imiric (Contributor Author) commented Sep 24, 2020

Why use fpm instead of the tools we were already using?

A few reasons:

  • The go-bin-deb and go-bin-rpm projects haven't been updated in 3 years. This normally shouldn't be a problem as the package formats don't change often, but it is a problem when dealing with Go.
  • We've already had issues installing the tools, and more recently installing the Glide package manager they use, which is now also abandoned. They would probably install fine as Go modules, but I don't think we should bother.
  • fpm is decently maintained, has a much larger user base, and seems much more stable. The fact that it's Ruby kind of sucks, but it installs fine on Ubuntu. It also supports more package formats, which could be useful, and it has a nicer CLI that doesn't require a JSON file, so we could remove some files from packaging/ (see the sketch below).
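
For illustration, a rough sketch of how an fpm invocation could replace the JSON-driven tools (the metadata flags, and the VERSION/NAME/dist layout borrowed from build-release.sh, are assumptions rather than the exact command in this PR):

- name: Build deb and rpm packages
  run: |
    for FMT in deb rpm; do
      # fpm maps local paths to install paths on the command line, so no packaging/*.json spec is needed
      fpm -s dir -t "$FMT" -n k6 -v "${VERSION#v}" -a amd64 \
        --description "k6 load testing tool" \
        -p "dist/k6-${VERSION}-amd64.${FMT}" \
        "dist/${NAME}/k6=/usr/bin/k6"
    done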

imiric pushed a commit to imiric/k6 that referenced this pull request Sep 24, 2020
@imiric (Contributor Author) commented Sep 28, 2020

I added Windows packaging in the latest push and cleaned things up a bit, so this is mostly done now. I still need to fix that golangci-lint issue, try to make http.TestErrorCodes pass, and possibly add a job that runs brew bump-formula-pr now that we have Mac support. Since we weren't running tests on macOS before this change either, updating the Homebrew package should be safe to automate, but ideally we should look into the flakiness of macOS tests on GHA and enable those too.

Uploading to Bintray is currently commented out as I'm not sure how to test it. Let's discuss that over Slack.

You can see a full release run (minus Bintray) here.

imiric pushed a commit to imiric/k6 that referenced this pull request Sep 29, 2020
@imiric (Contributor Author) commented Sep 29, 2020

I added a job to run brew in 1d9cf2e. It's untested of course, but I confirmed that brew is installed and that the passed arguments are correct, so it should work™️.
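
Roughly what that job could look like (the secret name and flags are assumptions, not necessarily what 1d9cf2e contains; a formula defined by URL would need --url/--sha256 instead of --tag/--revision):

publish-homebrew:
  runs-on: macos-latest
  needs: [configure]
  steps:
    - name: Bump k6 formula
      env:
        # token with public_repo scope so brew can open the PR against homebrew-core
        HOMEBREW_GITHUB_API_TOKEN: ${{ secrets.HOMEBREW_GITHUB_API_TOKEN }}
      run: |
        brew bump-formula-pr k6 \
          --tag "${{ needs.configure.outputs.version }}" \
          --revision "${{ github.sha }}"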

Pushing to codecov also seems to work, but I'm not sure what triggers the PR comment with the coverage report.

The Bintray secrets are now in the repo and the commands should work, but as discussed over Slack, it's difficult to test the jobs as we'd need to create some test package repositories, and I don't want to mess a lot with Bintray, given it's a black box and their support is not great if anything goes wrong. So we could test this with an actual release?

I think the only thing left is the suggestion to keep CircleCI and AppVeyor enabled to run alongside GH Actions for a while. I'm not sure if this is a good idea, as things like the coverage would be uploaded twice, though I guess we could disable that in CircleCI. @mstoykov, any thoughts on whether this is required?

Comment on lines 65 to 67
defaults:
  run:
    shell: bash
Contributor:

This can be moved to a global defaults section.

Contributor Author:

Sure, I guess it could be overridden for the Windows jobs.
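
A minimal sketch of the idea, assuming the Windows jobs would override the default with PowerShell (job names are illustrative):

# workflow-level default: applies to every job unless a job overrides it
defaults:
  run:
    shell: bash

jobs:
  test-windows:
    runs-on: windows-latest
    defaults:
      run:
        shell: pwsh  # job-level defaults take precedence over the workflow-level ones
    steps:
      - run: Get-ChildItem Env: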

- name: Install Go
  uses: actions/setup-go@v2
  with:
    go-version: 1.14.x
Contributor:

I am wondering if we can define the two versions ... in a variable above and just use the variable here, so we have:
current_version
previous_version
and maybe release_version

Contributor Author:

Yeah, that would be good, but it doesn't seem to work with the env context :( See https://github.com/imiric/k6/actions/runs/280602334. Apparently this only works for each step's env, and not with matrix or anything else, so it's quite limited. See this discussion.

There doesn't seem to be something equivalent to variables as in Azure Pipelines, so I think we're stuck with repeating the version everywhere. :-/

cd $(mktemp -d)
go get github.com/golangci/golangci-lint/cmd/golangci-lint@$GOLANGCI_VERSION
export PATH="$HOME/go/bin:$PATH"
popd
Contributor:

nit: this could've been done with cd - ... I think that is fairly standard?

Contributor Author:

Sure, but I'll actually rewrite the installation as a separate step with a custom working-directory to avoid this.
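
As a sketch, that step could look like this (runner.temp stands in for whatever directory ends up being used):

- name: Install golangci-lint
  # running go get outside the repository keeps it from touching go.mod/go.sum
  working-directory: ${{ runner.temp }}
  run: go get github.com/golangci/golangci-lint/cmd/golangci-lint@$GOLANGCI_VERSION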

Comment on lines 101 to 146
configure:
  runs-on: ubuntu-latest
  if: startsWith(github.ref, 'refs/tags/v')
  defaults:
    run:
      shell: bash
  outputs:
    version: ${{ steps.get_version.outputs.version }}
  steps:
    - name: Get version
      id: get_version
      run: |
        VERSION="${GITHUB_REF##*/}"
        echo "VERSION=${VERSION}"
        echo "::set-output name=version::${VERSION}"

Contributor:

Can you explain to me what this does?

Contributor Author:

It determines the version used globally, which can then be referenced by all jobs so that they can avoid having to figure out which version they're working with. It also defines a central point for similar configuration we might need globally.
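
For example, a downstream job can pick up the value like this (the job and step names are illustrative):

publish-github:
  runs-on: ubuntu-latest
  needs: [configure, build]  # makes needs.configure.outputs.* available to this job
  steps:
    - name: Create release
      run: echo "Releasing version ${{ needs.configure.outputs.version }}"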

Comment on lines +190 to +222
# We also want to tag the latest stable version as latest
if [[ "$VERSION" != "master" ]] && [[ ! "$VERSION" =~ (RC|rc) ]]; then
  docker tag "$DOCKER_IMAGE_ID" "$DOCKER_IMAGE_ID:latest"
  docker push "$DOCKER_IMAGE_ID:latest"
fi
Contributor:

We actually want to have a master tag, and latest should only be set on a tagged release - see https://github.com/loadimpact/k6/pull/1640/files#diff-1d37e48f9ceff6d8030570cd36286a61L124-L131

Contributor Author:

Doesn't this accomplish the same thing? $VERSION could be master, which will tag and push it by default, and if it's not (i.e. it's a version tag that's not an RC), it will also tag and push it as latest. I haven't tested and confirmed this yet, but it seems like it will work as you expect.

I'll let you know once this WIP PR is merged in my fork, or we can wait until this PR is merged instead.

Contributor:

Actually no, because this is not called unless there is a tag starting with v (since it depends on configure), so we will actually never push a master-tagged Docker image ... I think

Contributor Author:

Ah true, the configure dependency prevents it. It's trivial to fix if we allow configure to run on all commits and not just version tags, which seems safe to do. Let me know if you prefer to do this now, otherwise we can fix it later, maybe when we work on #1622.
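
A sketch of that later fix, assuming a simple fallback to master (this is not what the PR does right now):

configure:
  runs-on: ubuntu-latest
  # no `if: startsWith(github.ref, 'refs/tags/v')`, so this runs on every commit
  outputs:
    version: ${{ steps.get_version.outputs.version }}
  steps:
    - name: Get version
      id: get_version
      run: |
        # use the tag name for version tags, fall back to "master" otherwise
        if [[ "$GITHUB_REF" == refs/tags/v* ]]; then
          VERSION="${GITHUB_REF##*/}"
        else
          VERSION="master"
        fi
        echo "::set-output name=version::${VERSION}"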

Contributor:

I am fine with doing it later, as I don't think the Docker master tag is used all that much; it is more important that latest isn't updated unless we actually release, which seems to be the case :)
cc @na--

- name: Checkout code
  uses: actions/checkout@v2
- name: Install pandoc
  uses: crazy-max/ghaction-chocolatey@v1
Contributor:

I would prefer if this was also pinned to the commit ... given the security problems otherwise :( (I also would prefer if that wasn't required, but 🤷‍♂️)

@imiric (Contributor Author) Sep 30, 2020:

Sure, fixed in fdf5aeb.

But I don't think we should be concerned about malicious tag changes; that rarely happens. As long as we use a more specific version (so v1.3.0 instead of v1) we shouldn't have any build surprises.

And the drawback of using SHAs is their unfriendliness to humans 😆, as we can't easily tell what a version bump means, unlike with tags and semver.

(The reason I used it for actions/setup-homebrew is because they don't use tags...)
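
For reference, pinning an action to a commit instead of a tag looks like this (the SHA below is a placeholder, not the actual one from fdf5aeb):

- name: Install pandoc
  # crazy-max/ghaction-chocolatey v1 - pinned to a full commit SHA so a moved tag
  # can't silently change what runs in CI; the tag is kept in a comment for humans
  uses: crazy-max/ghaction-chocolatey@0000000000000000000000000000000000000000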

@@ -1,87 +1,102 @@
#!/usr/bin/env bash
Contributor:

Not certain that wasn't because it broke in some environments 🤔 ... what is the upside of not using it?

Contributor Author:

The standard Bash location on most Linux systems is /bin/bash. It's safer to not rely on env if you don't need it, since it avoids the uncertainty of which binary will be used. Otherwise it would run with whatever bash executable is first in PATH, which could be not only an incompatible version but also a malicious interpreter. This is unlikely of course, and since this script should only run in a controlled CI environment, we don't need the cross-platform benefit of using env.

@@ -45,12 +45,12 @@ type bridgeTestMethodsType struct{}

func (bridgeTestMethodsType) ExportedFn() {}

//lint:ignore U1000 needed for the actual test to check that it won't be seen
// nolint: unused
Contributor:

Apparently there should be no spaces, by Go convention (last line) - this still works, but it's maybe better if we keep to the convention, which I think we do in most other places.

Contributor Author:

That's good to know, but I've seen a few PRs of ours that actually introduce the space 😆 But OK, will remove it.

Contributor Author:

Actually, I reverted the linter fixes entirely in 7708840. It seems these changes were leftover from when I was trying to get the official golangci-lint GH Action to work, and are not needed anymore now that we're installing it manually and passing --new-from-rev as we do on CircleCI.

build-release.sh
Comment on lines 43 to 47
# The go-bin-* tools expect the binary in /tmp/
[ ! -r /tmp/k6 ] && cp "dist/${NAME}/k6" /tmp/k6
"go-bin-${FMT}" generate --file "packaging/${FMT}.json" -a amd64 \
--version "${VERSION#v}" -o "dist/k6-${VERSION}-amd64.${FMT}"
;;
Contributor:

Not certain about this being here (especially without any checks on whether it will pass). Previously we only needed Go to build the binaries for all OSes ... now this will break halfway through, and also we will upload the deb and rpm files in the release?

I am not saying that the deb and rpm shouldn't be in the release ... but it seems not all that useful, while being able to build the binaries for everything by running only one command was great ;)

Maybe ... have a second shell script for building the rpm/debs? It could error out if dist/${NAME}/k6 doesn't exist and tell you to run the other one, or just build it on its own. I am not certain.

Contributor Author:

There's already a second shell script for building the rpm/deb: gen-packages.sh. The reason I moved it here is because this script already packages the binaries in .zip and .tar.gz, so it makes sense to keep this contained. I'd also like to get rid of the packaging/ directory entirely, though now that we're back to using the Go tools I guess that won't be happening, and there are also some Windows package assets there that we can't remove.

now this will break halfway through

Why would it break halfway through? We already rely on this directory structure for the other packages and there's no error checking there... If anything, I missed using $OUT_DIR instead of dist in the cp command here, so I'll change that.

As for uploading the deb/rpm to the GitHub release, yeah, that might not be needed. I'd argue it would still be useful for anyone who prefers not using the Bintray repositories, and there's no harm in giving more options. But if you feel strongly about it I could filter them out when creating the GH release.

uses: actions/upload-artifact@v2
with:
  name: binaries
  path: dist/
Contributor:

This currently means that if you want one of the files you need to download the whole archive with all the binaries just to get it ... :(

My attempts at making this better ... led me to conclude this is the only way ... :( Luckily you only need to write all that code once :D

Contributor Author:

Yes, this is unfortunate, but I don't mind getting the whole archive with all binaries. It takes a few seconds anyway, and I much prefer that over cluttering the workflow with each of the 8(!) artifacts that we need (we still need to upload the rpm/deb here even if they're not in the GH release).

@mstoykov (Contributor) left a comment

I mostly want them to run in parallel in order to see if something breaks, although arguably with tests this won't be very interesting, so we can probably skip it.

Are all the lint fixes for things that weren't caught in CircleCI, or ... ?
Either way, I would prefer if we could move most of those non-CI changes to a different PR, merge that first, and rebase this on top ;). But this can be done later on as well, or skipped entirely, given the kind of changes.

imiric pushed 4 commits to imiric/k6 that referenced this pull request Sep 30, 2020
@imiric (Contributor Author) commented Sep 30, 2020

I removed the lint fixes in 7708840. They're not needed so we can leave them for later.

Keeping CircleCI/AppVeyor enabled side-by-side with GHA for a while would be safer, but it would also confuse external contributors and increase the chances for the build to fail, given our flaky test situation... So I'm not sure we need it. Let's wait for Ned's opinion next week, and for him to review this as well.

imiric pushed a commit to imiric/k6 that referenced this pull request Oct 1, 2020
@na-- (Member) commented Oct 9, 2020

I do really think that the ergonomics of not having to download a zip with all the executables just to get yours outweighs the fact that someone will need to copy-paste the same code 8 times once and change a few symbols in order for the different files to be uploaded separately

@mstoykov, I think you are mistaken and artifacts would be individually uploaded in the GitHub release: https://github.com/loadimpact/k6/blob/aa97043e1fb6ea1d76ed359827623b9f1962973b/.github/workflows/all.yml#L243-L247

They are just zipped to schlep the binaries between the build and publish-github steps. That said, we need to verify that we are not going to run out of space pretty quickly with the sizable binary artifacts this is going to generate. I see they recently added an option to specify the retention: actions/upload-artifact@59018c2
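
Combined with the retention-days: 7 change mentioned further down, the upload step would look roughly like this (the step name is illustrative):

- name: Upload binaries
  uses: actions/upload-artifact@v2
  with:
    name: binaries
    path: dist/
    retention-days: 7  # drop the intermediate CI artifacts after a week to save storage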

This doesn't replicate what was done in the packaging/ directory, but we can iterate on the details.
assets+=("-a" "$asset")
done
hub release create "${assets[@]}" -m "$VERSION" -m "$(cat ./release\ notes/${VERSION}.md)" "$VERSION"
# - name: Upload packages to Bintray
Member:

I assume you had these commented out while you were experimenting in your own repo? We can uncomment them now, right?

Contributor Author:

These commands actually haven't been tested yet, as we don't have a test repository in Bintray and I don't want to accidentally publish something... I added the secrets in the repo and am fairly confident they'll work, but I'm apprehensive about testing them during an actual release. Any ideas on how we could test them? Maybe set up a mock Bintray server and upload there?

Member:

hmm... Bintray has an open-source tier: https://bintray.com/signup/oss

can't we make another free account like that and add its credentials to your k6 fork, so you can fully test everything, including the artifact upload? after all, your k6 fork is still open-source, so it should count, we'd just be using the account for testing purposes (which should be publicly mentioned in the description, so some k6 users don't stumble on it by mistake from Google or something)

Contributor Author:

Hmm, OK, let me see if I can set that up.

Contributor:

I am still of the opinion that we can test this on release ... if it works - great. If it doesn't, you will need to do more or less the same thing you'd do now to fix it, but without half the shuffling of changing stuff around to push to another repo, changing credentials, and then changing them back.

Member:

Yeah, if it's complicated, then you can just uncomment the bintray lines and we'll cross our fingers during the next release 🤞 😅 It'd be ideal if we had a second bintray account and could test everything in a fork though, since that would allow us to more easily iterate on and test the CI scripts for any future modifications. Half of the tediousness of CI scripts comes from the fact that it's difficult to test them...

Contributor:

Maybe merge this with the lines uncommented, and if there is time left before the release (even in the last few days) this can be worked on?

Contributor Author:

It wasn't much work to create a separate Bintray account and identical repos, but I'm getting a 400 on upload, which could be anything... I'll see if I can quickly troubleshoot this without exposing the secrets, but I'd rather ensure this works now than have to do it during a release.

@na-- (Member) Oct 12, 2020:

you can expose things like the API key for your new test bintray account and then just reset it from bintray, if that will help with the debugging
and yeah, for a 400 status, 👍 for debugging now and not under pressure after a release

Contributor Author:

OK, so the 400 happened because the packages in Bintray needed to be created manually. After doing that, and fixing a GPG_PASSPHRASE secret name typo in my repo, the packages uploaded fine in this build, and you can see them here. I had to disable all tests as I was getting a flaky failure on previous builds... 😞 The flakiness is really a major problem that's wasting a lot of time in day-to-day work, and it gives off a bad impression to external contributors.

Anyway, I'll clean things up and squash some commits, and push the final version to this PR in a few minutes.

There's also an add-path deprecation warning I just noticed. I'll look into it if it's a simple fix.

imiric pushed 7 commits to imiric/k6 that referenced this pull request Oct 12, 2020
imiric pushed 2 commits that referenced this pull request Oct 12, 2020
@imiric (Contributor Author) commented Oct 12, 2020

I fixed the add-path deprecation warning, and added retention-days: 7 to the upload-artifact steps.
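
The add-path fix presumably boils down to switching from the deprecated workflow command to the environment file, along these lines (a sketch of the general pattern, not the exact diff):

# before (deprecated): echo "::add-path::$HOME/go/bin"
# after: append to the GITHUB_PATH environment file instead
- run: echo "$HOME/go/bin" >> "$GITHUB_PATH"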

Let's not forget to disable the CircleCI/AppVeyor webhooks in the repo settings so we can see passing builds, which could be done after this is merged as well.

@imiric requested review from mstoykov and na-- October 12, 2020 12:12
@na-- (Member) left a comment

LGTM 🤞

@imiric merged commit c22338c into master Oct 12, 2020
@imiric deleted the feat/ci-overhaul branch October 12, 2020 15:13
@na-- added this to the v0.29.0 milestone Oct 12, 2020
Development

Successfully merging this pull request may close these issues.

Don't build docker containers if we aren't gonna push them
3 participants