
629 speedup GitHub actions #650

Open · wants to merge 4 commits into base: main
Conversation

@stan-dot (Collaborator) commented Sep 25, 2024

Fixes #629

@stan-dot added the enhancement (New feature or request) and github_actions (Pull requests that update GitHub Actions code) labels Sep 25, 2024
@stan-dot self-assigned this Sep 25, 2024
@stan-dot linked an issue Sep 25, 2024 that may be closed by this pull request

codecov bot commented Sep 25, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.62%. Comparing base (cd91f0f) to head (26be98c).

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #650   +/-   ##
=======================================
  Coverage   92.62%   92.62%           
=======================================
  Files          35       35           
  Lines        1654     1654           
=======================================
  Hits         1532     1532           
  Misses        122      122           

☔ View full report in Codecov by Sentry.

@stan-dot marked this pull request as ready for review September 27, 2024 13:27

@stan-dot (Collaborator, Author)

@gilesknap how can I actually measure whether this succeeded in speeding things up? I'd like to compare against a recent successful merge like #640.

@stan-dot (Collaborator, Author)

maybe I do need to add 'cache-from' to all the workflows?

#650 #629 I made an attempt at caching the Python env; it gains you about 30-90 per workflow build, not sure if it is worth it or not. We can probably play the same game with the container part of the workflow, which is where the real bottleneck is.

@stan-dot Anyway, see what you think.
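For reference, the container-side caching alluded to above is usually wired up through docker/build-push-action's cache-from/cache-to inputs backed by the GitHub Actions cache. A minimal sketch, assuming the GHA cache backend; the image tag and context are placeholders, not taken from this repository's workflows:

```yaml
# Hypothetical job illustrating BuildKit layer caching via the GitHub Actions
# cache backend; the tag and context are placeholders, not this repo's workflow.
jobs:
  container:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Build image with layer cache
        uses: docker/build-push-action@v6
        with:
          context: .
          push: false
          tags: ghcr.io/example/app:pr    # placeholder image name
          cache-from: type=gha            # reuse layers cached by earlier runs
          cache-to: type=gha,mode=max     # write all layers back to the cache
```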
@stan-dot (Collaborator, Author) commented Oct 4, 2024

The speedup only applies on runs where the cache hits (e.g. a new branch reusing an existing cache) - so this PR's pipeline time is normal, as it still had to create the cache.

Has this kind of caching been considered for hyperion, @DominicOram? Or have there been any contraindications?

@DominicOram (Contributor)

> Has this kind of caching been considered for hyperion, @DominicOram? Or have there been any contraindications?

Our actions don't take too long, and developers generally run the test suite locally before pushing to catch errors anyway, so it's rare that the time the actions take slows us down at all.

@DominicOram (Contributor)

I would also worry about caching too much and missing things; it's quite valuable to know that a PR works from a clean slate.

@gilesknap (Contributor)

> I would also worry about caching too much and missing things; it's quite valuable to know that a PR works from a clean slate.

Note that with respect to the container build cache, it only hits on layers that have not changed. However, I suppose that if a layer fetches a remote resource without locking the version of that resource, then the cache could mask issues.
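One common mitigation, if layer caching were adopted, is to keep PR builds cached but force a periodic clean-slate build. A rough sketch using docker/build-push-action's no-cache input; the schedule, job name, and cadence are illustrative, not part of this PR:

```yaml
# Illustrative only: disable BuildKit caching on scheduled runs so a
# clean-slate build still happens regularly, while PR builds stay cached.
on:
  pull_request:
  schedule:
    - cron: "0 3 * * 1"   # weekly clean build, cadence purely illustrative

jobs:
  container:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          context: .
          cache-from: type=gha
          cache-to: type=gha,mode=max
          # skip all cached layers on the scheduled run for a clean-slate build
          no-cache: ${{ github.event_name == 'schedule' }}
```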

@stan-dot (Collaborator, Author) commented Oct 7, 2024

@callumforrester based on all the discussion above, is the functionality in this PR desirable, with no need to pursue further caching at the moment?

@stan-dot (Collaborator, Author) commented Oct 7, 2024

Still, I think we could run lint - test - build docs - dist all in one job, using one cache, and always clear the cache in the first job if possible.
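For concreteness, the single-job layout being floated here would look roughly like the following sketch; the individual step commands (ruff, pytest, sphinx-build, build) are placeholders, not taken from this repository's workflow files:

```yaml
# Rough sketch of the single-job idea; step commands are placeholders,
# not copied from this repository's workflows.
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: pip install -e .[dev]           # one install shared by all steps below
      - run: ruff check .                    # lint   (placeholder command)
      - run: pytest                          # tests  (placeholder command)
      - run: sphinx-build docs _build/docs   # docs   (placeholder command)
      - run: python -m build                 # dist   (placeholder command)
```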

@Relm-Arrowny

Just a note: the way the cache is set up here at the moment, pip install always runs after the env is restored from the cache, which means it will update packages before running the tests, so it should not miss anything and it should also downgrade correctly.
As for making lint - test - build docs - dist all one job: it is possible, but you lose a bit of the speed, as the steps' dependencies are slightly different so they will have to install those anyway; plus, because of the race condition on the single cache, you lose a bit more time, as only the job that finishes first gets to update the cache, which will most likely be dist or lint.
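The cache-then-reinstall pattern described above is roughly the standard actions/cache recipe, where pip still runs after the restore so stale packages get corrected. A hedged sketch; the cache path and key are illustrative, not copied from this PR:

```yaml
      # Sketch of the cache-then-reinstall pattern; path and key are
      # illustrative, not taken verbatim from this PR's workflow.
      - name: Restore cached pip downloads
        uses: actions/cache@v4
        with:
          path: ~/.cache/pip
          key: pip-${{ runner.os }}-${{ hashFiles('**/pyproject.toml') }}
          restore-keys: |
            pip-${{ runner.os }}-

      # pip still runs after the restore, so anything stale in the cache is
      # upgraded (or downgraded) to match the current requirements.
      - name: Install project
        run: pip install -e .[dev]
```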

@callumforrester (Collaborator)

Agree with @Relm-Arrowny; additionally, if we condensed all the CI into one job we would lose the nice display showing which parts have passed and which have failed, which I find useful when reviewing a PR.
@stan-dot I'm happy to examine your workflow with you and see if we can figure out why the CI is bottlenecking you; it isn't bottlenecking me.

@callumforrester (Collaborator)

@stan-dot I'm inclined to close this unless you seriously object

@stan-dot (Collaborator, Author)

@callumforrester I don't mean 'all CI in one job', just making the docs build go off the dev tests, and the dist as well.

It is not bottlenecking me personally.

@stan-dot (Collaborator, Author)

@callumforrester I agree we don't need to deepen the caching level. Is the caching in this PR not fine as it stands? Some caching is a best practice AFAIK.

@callumforrester (Collaborator)

It depends; I'm unclear on whether caching was left out of the copier template due to lack of time or for a specific reason. @coretl or @gilesknap can probably comment.

@gilesknap (Contributor)

I believe we worked out that the costs were greater than the benefits for a small Python container. I do use the cache to great effect on generic IOCs. In fact, reflecting on how fast the container builds on GitHub were just yesterday for iic-adandor3, maybe this is worth revisiting.

@gilesknap (Contributor)

Oh, I just realized that this is not about container builds. In case it was not obvious, my comments were about containers. We did look at caching, but only for the copier template container build, and decided against it.

@coretl (Contributor) commented Oct 16, 2024

I would prefer to investigate how long it would take to pixi install the deps rather than pip install them. Wheels are just zip files after all, so if the installer (pixi rather than pip) is fast enough, it should get close to the speed of restoring a cache without the complexity of cache invalidation (without a lockfile we intentionally pull the latest version of things).
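For context, a pixi-based install step could look something like the sketch below. This assumes the project adds a pixi manifest (pixi.toml or a pixi table in pyproject.toml) that provides pytest in the environment, which this repository does not necessarily have today; the setup-pixi action version shown is indicative only.

```yaml
      # Hedged sketch of a pixi-based install; assumes a pixi manifest exists
      # in the repo, which is an assumption rather than the state of this PR.
      - name: Set up pixi
        uses: prefix-dev/setup-pixi@v0.8.1

      - name: Install dependencies with pixi
        run: pixi install

      - name: Run tests inside the pixi environment
        run: pixi run pytest   # assumes pytest is available in the manifest
```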
