
Fix ASV CI Workflow #792

Merged: 30 commits merged into main on May 17, 2024

Conversation

@philipc2 (Member) commented May 16, 2024

@philipc2 self-assigned this May 16, 2024
@philipc2 added the CI (Continuous Integration) label May 16, 2024
@philipc2 (Member Author)

@kafitzgerald @anissa111 @erogluorhan

I've put together fixes similar to what is in geocat-comp, but the workflow is still failing. Any ideas?

@philipc2 (Member Author)

When the workflow tries to run the benchmark, the following error occurs:

File "/home/runner/miniconda3/envs/asv/lib/python3.11/site-packages/asv/plugins/git.py", line 123, in get_hashes_from_range
STDERR -------->
fatal: bad revision '2024.02.0..main'

output = self._run_git(args, valid_return_codes=(0, 1), dots=False)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/home/runner/miniconda3/envs/asv/lib/python3.11/site-packages/asv/plugins/git.py", line 70, in _run_git
return util.check_output([self._git] + args, env=env, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/miniconda3/envs/asv/lib/python3.11/site-packages/asv/util.py", line 618, in check_output
raise ProcessError(args, retcode, stdout, stderr)
asv.util.ProcessError: Command '/usr/bin/git rev-list --first-parent 2024.02.0..main --' returned non-zero exit status 128

Something seems to be off with how it's fetching the package version?
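For anyone hitting a similar failure: a throwaway diagnostic step along these lines would show which refs the runner's clone can actually resolve (a sketch, not part of the project's workflow; the step name is illustrative):

      # hypothetical debugging step
      - name: Inspect tags visible on the runner
        run: |
          git tag --list
          git rev-parse 2024.02.0 || echo "tag 2024.02.0 not found"
          git rev-parse v2024.02.0 || echo "tag v2024.02.0 not found"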

@kafitzgerald (Collaborator)

Is it dropping the "v" on the version tag?

I wonder if you need to remove the shell specification from that run-benchmarks step now that you have it specified above at the job level; maybe that's causing some issues. Not sure though.
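For reference, the two places a shell can be set in a GitHub Actions job look roughly like this (a sketch; the job name, step name, and asv invocation are illustrative):

jobs:
  benchmark:
    defaults:
      run:
        shell: bash -l {0}   # job-level default, applied to every run step
    steps:
      - name: Run benchmarks
        # a step-level "shell:" here would override the job default;
        # dropping it leaves the job-level setting in effect
        run: asv run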

@philipc2 (Member Author)

Running git tag locally shows the following (including all the tags we've had):

v2023.06.0
v2023.08.0
v2023.09.0
v2023.09.1
v2023.10.0
v2023.10.1
v2023.11.0
v2023.11.1
v2024.01.0
v2024.01.1
v2024.02.0
v2024.03.0
v2024.04.0
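
Worth noting, though the thread doesn't confirm it as the cause here: actions/checkout clones shallowly by default (fetch-depth: 1), which leaves most tags out of the runner's clone, so a tag that resolves locally can still be missing in CI. Fetching the full history and tags looks like:

      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so tags like v2024.02.0 resolve on the runner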

@philipc2 (Member Author)

Same issue when dropping the v from the tag:

https://github.com/UXARRAY/uxarray/actions/runs/9116712536/job/25065902744

@kafitzgerald (Collaborator)

Sorry, I meant to suggest that the shell re-specification might be causing an issue and that might be why it's struggling to find the tag. I'm not sure about this though.

@philipc2 (Member Author) commented May 16, 2024

> Sorry, I meant to suggest that the shell re-specification might be causing an issue and that might be why it's struggling to find the tag. I'm not sure about this though.

I see! I tried a couple of variations with and without it, and that doesn't seem to be the issue either. I appreciate the help though!

@anissa111 (Member)

Huh! I don't know what's happening from just looking, but I'll take a look.

@philipc2 (Member Author) commented May 16, 2024

Now that the benchmarks are actually running, the ones added in #775 appear to be broken. The others do seem to be running, though!

Looking into fixing this as part of this PR.

@anissa111 (Member)

Also, I noticed while running a bunch of these benchmarks that your environment caching isn't actually working on this workflow. I don't know if we're having the same issue on geocat-comp.

If you check the recent actions, a lot of them should be caching with this key but aren't. [screenshot]

@philipc2 (Member Author)

I think I just realized that none of the changes I was making to the benchmarks were actually being reflected when running the GitHub Actions workflows.

@erogluorhan (Member) commented May 17, 2024

> I think I just realized that none of the changes I was making to the benchmarks were actually being reflected when running the GitHub Actions workflows.

Is it because of this or something else: if you were re-running the workflow again and again through the "Re-run jobs" button of an older, failed action, then you are right, it would use the older version of the workflow file from the time of the related commit, even if you changed the workflow file in the meantime. However, if you use the "Run workflow" button of any workflow and choose the branch you want, the action uses the up-to-date workflow file of that branch.

EDIT: Never mind, I just noticed you meant the changes to the actual benchmarks, not the workflow file. Do you have an idea why the benchmark changes are not being reflected?
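
As an aside to the re-run behavior described above, the manual "Run workflow" button only appears for workflows that declare a workflow_dispatch trigger, roughly like this (a sketch; the second trigger is illustrative, not necessarily this project's configuration):

on:
  workflow_dispatch:   # enables the "Run workflow" button; runs the selected branch's workflow file
  pull_request:        # illustrative additional trigger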

@philipc2 (Member Author) commented May 17, 2024

> Do you have an idea why the benchmark changes are not being reflected?

After some tinkering, I believe it's because the triggered workflow uses the benchmarks currently stored in the main branch, so any changes I made to the mpas_ocean benchmark were not reflected.

I made the same changes on my fork (and updated the workflow there) and the benchmark is working as expected now.

A small snippet of the mpas_ocean functions executing properly now:

[ 1.56%] ··· mpas_ocean.Gradient.time_gradient                           ok
[ 1.56%] ··· ============ =============
              resolution
             ------------ -------------
                 480km      289±0.9μs
                 120km     2.80±0.01ms
             ============ =============
[ 1.63%] ··· mpas_ocean.Integrate.peakmem_integrate                      ok
[ 1.63%] ··· ============ ======
              resolution
             ------------ ------
                 480km      230M
                 120km      252M
             ============ ======
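
For context on the fix above: asv takes the benchmark suite from the working copy it runs in, not from each commit being benchmarked, so the workflow has to check out the branch carrying the edited benchmarks. A sketch under that assumption, with illustrative step contents:

      - uses: actions/checkout@v4
        with:
          ref: ${{ github.head_ref }}   # check out the PR branch so edited benchmarks are picked up
      - name: Run benchmarks
        run: asv continuous main HEAD   # compares main vs. the PR head using the checked-out benchmark files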

@philipc2 (Member Author)

> Also, I noticed while running a bunch of these benchmarks that your environment caching isn't actually working on this workflow. I don't know if we're having the same issue on geocat-comp.

Looks like this is also an issue in geocat-comp.

From the most recent run there:

[screenshot]

@philipc2 (Member Author)

Interesting: on run 68, we had a successful environment cache.

[screenshot]

@kafitzgerald (Collaborator)

> Looks like this is also an issue in geocat-comp.

Seems like you already figured out what you needed, but I think this is because the cache needs to be from the current date and the action has to run successfully to create the cache. The action on geocat-comp seems to fail right now if there's not a new commit, and then it doesn't create the cache, which makes testing difficult.

Anyway, I think it's actually working fine, just picky.
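
For reference, a date-stamped cache of the kind described here looks roughly like this (a sketch; the path and key are assumptions, not the project's exact workflow):

      - name: Get date
        id: get-date
        run: echo "today=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
      - name: Cache the benchmarking environment
        uses: actions/cache@v4
        with:
          path: ~/miniconda3/envs/asv
          # a new key every day; the entry is only saved if the job
          # succeeds, so a failing run leaves nothing to restore
          key: asv-${{ runner.os }}-${{ steps.get-date.outputs.today }}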

@kafitzgerald (Collaborator) left a review comment

Looks good to me, though I didn't review the changes in your benchmarks. Someone else likely has more context on those.

@philipc2 (Member Author)

> Seems like you already figured out what you needed, but I think this is because the cache needs to be from the current date and the action has to run successfully to create the cache.
>
> Anyway, I think it's actually working fine, just picky.

Agreed, I'll keep an eye on it. I looked over the documentation and examples of how to cache environments, and nothing in our use case seems different.

@philipc2 merged commit 2a4bcee into main on May 17, 2024
16 checks passed
Labels: CI (Continuous Integration)
Linked issue this PR may close: CI Failure: ASV Benchmarking
4 participants