The runner has received a shutdown signal #353

Closed

grische opened this issue May 11, 2023 · 10 comments · Fixed by #552

grische commented May 11, 2023

While running the action-gh-release action, we regularly see runs fail with "The operation was canceled." One run failed after 17 minutes, so I assume it is not a timeout.

It seems to happen after the last upload has concluded.

Here is an example running with Debug enabled: https://github.com/freifunkMUC/site-ffm/actions/runs/4940203059/jobs/8842882643

Here are the last lines before it failed:

⬆️ Uploading x86-legacy_output.tar.gz...
##[debug]Re-evaluate condition on job cancellation for step: 'Create Release & Upload Release Assets'.
##[debug]Skip Re-evaluate condition on runner shutdown.
Error: The operation was canceled.
##[debug]System.OperationCanceledException: The operation was canceled.
##[debug]   at System.Threading.CancellationToken.ThrowOperationCanceledException()

Other (non-expired) examples are:

grische commented Jun 2, 2023

@softprops is there anything else we can provide to help debug this issue?

@smallprogram

With a workflow like this:

jobs:
  job_init: 
    runs-on: ubuntu-latest
    steps:
    - name: Generate Image
      id: generate_image
      run: |
        mkdir output
        cd $GITHUB_WORKSPACE/output
        dd if=/dev/zero of=outputfile.img.00 bs=1M count=1900
        dd if=/dev/zero of=outputfile.img.01 bs=1M count=1900
        dd if=/dev/zero of=outputfile.img.02 bs=1M count=1900
        dd if=/dev/zero of=outputfile.img.03 bs=1M count=1900
        dd if=/dev/zero of=outputfile.img.04 bs=1M count=1900
        dd if=/dev/zero of=outputfile.img.05 bs=1M count=1900
        echo "FIRMWARE=$PWD" >> $GITHUB_ENV
        echo "status=success" >> $GITHUB_OUTPUT
    - name: Upload Toolchain to release
      uses: softprops/action-gh-release@v1 # ref garbled in the source ("[email protected]"); v1 assumed
      if: steps.generate_image.outputs.status == 'success'
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      with:
        tag_name: 'test'
        files: ${{ env.FIRMWARE }}/*    

it returns:

The runner has received a shutdown signal. This can happen when the runner service is stopped, or a manually started runner is canceled.


@smallprogram

Is there anyone who can help fix this issue? It bothers me so much

grische commented Sep 15, 2023

@softprops is it possible to limit the number of files uploaded? If we send them one by one it might work properly.

@smallprogram

@grische
I tried it and found that this problem occurs when the free space on /dev/root is smaller than the total size of the files to be uploaded; when the free space is larger than the total size, there is no problem. I don't know whether that counts as a real fix, but for now, freeing up disk space works for me.
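
If that observation holds, a pre-flight free-space check could fail fast before any upload starts. A minimal sketch, assuming Node 18.15+ for fs.promises.statfs; assertEnoughFreeSpace is a hypothetical helper, not part of the action:

import { statfs } from "fs/promises";
import { statSync } from "fs";

// Hypothetical pre-flight check: refuse to start uploading when free space
// on the given mount is smaller than the combined size of the assets.
async function assertEnoughFreeSpace(paths: string[], mount = "/"): Promise<void> {
  const needed = paths.reduce((sum, p) => sum + statSync(p).size, 0);
  const stats = await statfs(mount);
  const free = stats.bsize * stats.bavail; // bytes available to unprivileged users
  if (free < needed) {
    throw new Error(`only ${free} bytes free on ${mount} for ${needed} bytes of assets`);
  }
}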

@seppzer0

I encountered the same issue a few hours ago, while trying to release a few large files via this GitHub Action.
@softprops , any ideas for a workaround?

@seppzer0

I confirmed that this issue is specific to this particular GitHub Action.

I used a similar release action to publish the same set of files, 4.9 GB in total, and it worked just fine. In comparison, the same pipeline using this action failed with the shutdown signal.

enumag commented Nov 5, 2023

I ran into the same issue. I believe the runner ran out of memory, because this tool loads every file into memory in full before uploading it:

data: readFileSync(path),

In my case it was 4 files of around 1.5 GB each, 6 GB in total, so no wonder that was too much.

This really should not happen: an upload, especially one that may involve large files, should never load a whole file into memory at once, only one chunk at a time.

enumag commented Nov 5, 2023

Did someone find an alternative tool that doesn't have this issue? There are plenty on the GitHub Actions Marketplace, but so far I couldn't find one that is similarly feature-complete and free of this bug.

enumag commented Nov 5, 2023

@softprops It shouldn't be too difficult to fix this bug. Most likely you just need to use this: https://www.geeksforgeeks.org/node-js-fs-createreadstream-method/
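
A minimal sketch of that suggestion, assuming the asset is POSTed to an upload URL as a raw octet stream; the function name, uploadUrl, and token are placeholders, and Node's built-in fetch (undici) needs duplex: "half" for streaming request bodies:

import { createReadStream, statSync } from "fs";
import { Readable } from "stream";

// Sketch: upload one asset as a stream instead of buffering the whole
// file with readFileSync. uploadUrl and token are placeholders.
async function uploadAssetStreaming(uploadUrl: string, path: string, token: string) {
  const size = statSync(path).size;
  const body = Readable.toWeb(createReadStream(path)) as unknown as ReadableStream;
  const response = await fetch(uploadUrl, {
    method: "POST",
    headers: {
      authorization: `token ${token}`,
      "content-type": "application/octet-stream",
      "content-length": String(size),
    },
    body,
    // Node's fetch requires half-duplex mode for stream bodies.
    duplex: "half",
  } as RequestInit);
  if (!response.ok) {
    throw new Error(`upload failed with HTTP ${response.status}`);
  }
}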

yzh119 added a commit to flashinfer-ai/flashinfer that referenced this issue Mar 11, 2024
…n dispatch (#173)

The release action
[failed](https://github.com/flashinfer-ai/flashinfer/actions/runs/8227731974/job/22501369048)
because the
[action-gh-release](https://github.com/softprops/action-gh-release)
action does not support uploading multiple large files at a time:
softprops/action-gh-release#353

This PR changes the release job to upload artifacts in multiple batches.

Also, #172 removed the instantiation of page prefill kernels for `page_size=8`; this PR fixes the behavior of `DISPATCH_PAGE_SIZE` by removing the corresponding branches.
xen0n added a commit to xen0n/action-gh-release that referenced this issue Dec 6, 2024
Previously all assets were being read synchronously into memory, making
the action unsuitable for releasing very large assets. Because the
client library allows stream body inputs (it just forwards it to the
underlying `fetch` implementation), just do it.

The idea is also suggested by @enumag in
softprops#353 (comment).

Fixes: softprops#353
Signed-off-by: WANG Xuerui <[email protected]>
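
In other words, the change is roughly of this shape, replacing the data: readFileSync(path) quoted earlier with a read stream. A sketch only, not the actual patch; the octokit call and the typing cast are assumptions:

import { Octokit } from "@octokit/rest";
import { createReadStream, statSync } from "fs";

// Sketch: stream an asset through the GitHub REST client, which forwards
// the body to fetch, so a read stream avoids buffering the file in memory.
async function uploadReleaseAsset(
  octokit: Octokit,
  owner: string,
  repo: string,
  releaseId: number,
  path: string,
  name: string
) {
  const size = statSync(path).size;
  return octokit.rest.repos.uploadReleaseAsset({
    owner,
    repo,
    release_id: releaseId,
    name,
    headers: {
      "content-type": "application/octet-stream",
      "content-length": size,
    },
    // Previously: data: readFileSync(path) (whole file in memory).
    // The parameter is typed as string, so the stream needs a cast.
    data: createReadStream(path) as unknown as string,
  });
}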