`pullRequests.frequency` config doesn't work if the Scala Steward run has failures #60
rtyley added a commit to guardian/sponsorship-expiry-email-lambda that referenced this issue on Feb 27, 2024:
The guardian/sponsorship-expiry-email-lambda repo has some code in it that requires Java 8, but it was recently added to our Scala Steward run (https://github.com/guardian/scala-steward-public-repos?tab=readme-ov-file#how-to-add-a-new-public-repo-for-scanning-by-scala-steward), and Scala Steward runs using Java 11 - so this new repo was causing the Scala Steward run to fail: https://github.com/guardian/scala-steward-public-repos/actions/runs/8064282983/job/22027855642#step:5:1753 That's bad, especially for the reason described in guardian/scala-steward-public-repos#60. This change updates guardian/sponsorship-expiry-email-lambda to use Java 11, and also updates sbt to the latest version so that it runs on our modern M1 laptops.
ioannakok added a commit to guardian/scala-steward that referenced this issue on May 31, 2024:
When even one repository fails in a Scala Steward run, the workspace is not persisted by GitHub Actions, because GH Actions won't persist the workspace of a failing job. Persisting the workspace is crucial for respecting the `pullRequests.frequency` configuration, because the workspace is how Scala Steward records that it has opened a pull request. This means that one failing repository can cause a lot of user annoyance: their configuration is no longer respected and too many PRs get opened. This change aims to fix that by only returning a failure exit code if *all* repos have failed; so long as at least one repository succeeded, the exit code will be success and the workspace will be persisted. If administrators need to know which repos are failing, they can use the jobs summary introduced by scala-steward-org#3071. See more here: guardian/scala-steward-public-repos#60 Co-authored-by: Roberto Tyley <[email protected]>
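A minimal sketch of the exit-code behaviour that commit describes (the names and types below are illustrative, not the actual scala-steward internals): the run only reports failure when every repo failed, so the GHA workspace cache is still saved whenever at least one repo succeeded.

```scala
// Illustrative sketch only - not the real scala-steward code or its types.
sealed trait RepoOutcome
case object Succeeded extends RepoOutcome
final case class Failed(reason: String) extends RepoOutcome

object ExitCode {
  val Success = 0
  val Error   = 1

  // Old behaviour: any single failing repo fails the whole run,
  // which stops GitHub Actions from persisting the workspace.
  def strict(outcomes: List[RepoOutcome]): Int =
    if (outcomes.exists(_.isInstanceOf[Failed])) Error else Success

  // Proposed behaviour: only fail if *all* repos failed.
  def lenient(outcomes: List[RepoOutcome]): Int =
    if (outcomes.nonEmpty && outcomes.forall(_.isInstanceOf[Failed])) Error
    else Success
}
```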
From Scala Steward's documentation, it's important that Scala Steward's filesystem workspace is persisted between GitHub Action runs.
So, if the Scala Steward run is even partially failing (e.g. due to just one troublesome repo), the `pullRequests.frequency` configuration will not work, because the filesystem-backed `PullRequestRepository` won't retain details of any new PRs created by Scala Steward - GHA filesystem caching only takes place if the action was successful.
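For context, `pullRequests.frequency` is set per-repo in `.scala-steward.conf`; a minimal example (the interval shown is just illustrative, not necessarily what any of our repos use) looks like:

```
# .scala-steward.conf
# Ask Scala Steward to open update PRs for this repo at most once a week.
pullRequests.frequency = "7 days"
```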
Recent example

The Public Repos Scala Steward GHA workflow has been failing for the past 3 weeks (due to guardian/typerighter#384, see guardian/typerighter#384 (comment)), and over that time Scala Steward has not been able to record any new PRs it has made - so it has gradually reverted to the annoying behaviour of raising new PRs for AWS artifacts every single day (e.g. guardian/play-secret-rotation#406 & guardian/play-secret-rotation#409).
This is unfortunate given the recent work on scala-steward-org/scala-steward#3102.
Possible actions
Persist filesystem even if GHA fails
Ideally, filesystem persistence would occur even if the GHA run failed - and it looks like this has actually now been added to the Scala Steward GitHub Action.
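As an illustration of the general idea (a sketch only, not the actual scala-steward-action implementation), a workflow can save the workspace cache unconditionally by guarding the save step with `if: always()`, so it runs even when the Scala Steward step fails:

```yaml
# Illustrative sketch only: not the real scala-steward-action workflow.
- name: Run Scala Steward
  run: ./run-scala-steward.sh          # hypothetical wrapper script
- name: Persist Scala Steward workspace
  if: always()                         # run this step even if the previous step failed
  uses: actions/cache/save@v4
  with:
    path: workspace
    key: scala-steward-workspace-${{ github.run_id }}
```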
The persistence fix was released with scala-steward-action v2.67.0 in September 2024, which we upgraded to with this PR, merged 30th October.

Stop Scala Steward returning a 'failure' exit code when only some repos have failed
This can be enabled using the new `--exit-code-success-if-any-repo-succeeds` flag - but do we want to do this, now that scala-steward-org/scala-steward-action#631 has been released?
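For reference, if we did turn it on, the flag is just an extra CLI argument to Scala Steward; assuming the scala-steward-action forwards extra flags via an `other-args` input (that input name is my assumption here - check the action's README), it might look like:

```yaml
# Hypothetical sketch: assumes extra CLI flags can be passed via `other-args`;
# authentication and repo-selection inputs are omitted.
- uses: scala-steward-org/scala-steward-action@v2
  with:
    other-args: '--exit-code-success-if-any-repo-succeeds'
```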
Alert teams when one of their repos fails

It would be good to alert teams when one of their repos is causing failure in Scala Steward - scala-steward-org/scala-steward#3071 was a step towards making responsibility a bit more visible, but it was only a first step, and didn't provide alerting (also, the error from guardian/typerighter#384 didn't even result in a usable summary report).