629 speedup GitHub actions #650
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@           Coverage Diff           @@
##             main     #650   +/-   ##
=======================================
  Coverage   92.62%   92.62%
=======================================
  Files          35       35
  Lines        1654     1654
=======================================
  Hits         1532     1532
  Misses        122      122

View full report in Codecov by Sentry.
@gilesknap how can I actually measure whether this succeeded in speeding things up? I'd like to compare against a recent successful merge like this one.
maybe I do need to add 'cache-from' to all the workflows?
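For reference, a minimal sketch of what `cache-from` could look like in a workflow build step, assuming the workflows use `docker/build-push-action` and the GitHub Actions (`gha`) cache backend — the step name and context are placeholders, not taken from this repo:

```yaml
- name: Build container image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: false
    # Reuse layers from the GitHub Actions cache backend;
    # a cache miss simply falls back to a full build.
    cache-from: type=gha
    # mode=max also caches intermediate layers, not just the final ones.
    cache-to: type=gha,mode=max
```

The `gha` backend is one common choice; a registry-based cache (`type=registry`) is another option when builds run across many runners.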
it applies only when we run a new branch and the cache hits - so this PR's pipeline time is normal, as it still had to create the cache. has this kind of caching been considered for hyperion, @DominicOram? or have there been any contraindications?
Our actions don't take too long, and developers generally run the test suite locally before pushing to catch errors anyway, so it's rare that the time actions take to run slows us down at all.
I would also worry about caching too much and missing things; it's quite valuable to know that a PR works from a clean slate.
Note that, with respect to the container build cache, it only hits on layers that have not changed. However, I suppose that if a layer fetches a remote resource without locking its version, the cache could mask issues.
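To illustrate the masking concern with a hypothetical Dockerfile (not from this repo): a layer is keyed on its instruction text, so an unpinned fetch keeps hitting the cache even after upstream publishes a new release:

```dockerfile
FROM python:3.11-slim

# Unpinned: the instruction text never changes, so this layer keeps
# hitting the cache and a stale copy of the package is reused even
# after a new upstream release.
RUN pip install some-package

# Pinned: the instruction text changes whenever the version changes,
# so the cache is invalidated exactly when the dependency changes.
RUN pip install some-package==1.2.3
```

`some-package` is a placeholder; the point is that pinning makes cache invalidation track the dependency, while unpinned fetches can silently go stale.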
@callumforrester based on all the discussion above, is the functionality from this PR desirable as-is, with no need to pursue deeper caching for now?
still I think we could run lint / test / build docs / dist all in one job, using one cache, always clearing the cache in the first job if possible
Just a note: the way caching is set up here at the moment, pip install always runs after the cached env is restored, which means it will update before running tests. So it should not miss anything, and it should also downgrade correctly.
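That restore-then-install ordering can be sketched as follows, assuming `actions/setup-python` with its built-in pip cache (the extras name `[dev]` is a placeholder):

```yaml
- uses: actions/setup-python@v5
  with:
    python-version: "3.11"
    # Restores the pip download cache keyed on the requirements files.
    cache: pip

# pip install still runs after the cache is restored, so any changed
# pins are upgraded or downgraded as needed; the cache only speeds up
# wheel downloads, it never decides which versions are installed.
- run: pip install -e .[dev]
```

Because the resolver always runs, the cache cannot mask a dependency change; the worst case on a stale cache is re-downloading a wheel.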
Agree with @Relm-Arrowny. Additionally, if we condensed all the CI into one job we would lose the nice display showing which parts have passed and which have failed, which I find useful when reviewing a PR.
@stan-dot I'm inclined to close this unless you seriously object
@callumforrester I don't mean 'all CI in one job', just to make the docs go off the … It is not bottlenecking me personally
@callumforrester I agree we don't need to deepen the caching level. Is the caching in this PR not fine? Some caching is a best practice afaik
It depends; I'm unclear on whether caching was left out of the copier template due to lack of time or for a specific reason. @coretl or @gilesknap can probably comment.
I believe we worked out that the costs were greater than the benefits for a small Python container. I do use the cache to great effect on generic IOCs. In fact, reflecting on how fast container builds on GitHub were just yesterday for iic-adandor3, maybe this is worth revisiting.
Oh, just realized that this is not about container builds. In case it was not obvious, my comments were about containers: we looked at caching for the copier template container build only, and decided against it.
I would prefer to investigate how long it would take to …
Fixes #629