[CI Problem] x386 CI running out of RAM #10180
Comments
We could also try to investigate why pytest-forked doesn't like GPUs. Could you post any information you have about that?
I attempted to fix this using pytest --forked here: #10174. But it failed a lot of tests on a lot of jobs related to GPU. I got the feeling that initializing the GPU interface in the main process and then trying to access it from the forked child broke an assumption somewhere in the stack, but I didn't dig very deeply into what the root cause might be.
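A toy illustration of the suspected failure mode above (this emulates the hypothesis, not CUDA itself): many driver contexts are only valid in the process that created them, so a handle initialized in the parent looks present in a forked child but is unusable. The `DEVICE` dict and function names here are hypothetical stand-ins.

```python
import multiprocessing as mp
import os

# Hypothetical stand-in for a GPU/driver handle created in the parent.
DEVICE = {"owner_pid": None}

def init_device():
    """Pretend to initialize the device, recording the initializing PID."""
    DEVICE["owner_pid"] = os.getpid()

def device_is_usable():
    """A real driver context is often only valid in the process that
    created it; emulate that by comparing PIDs."""
    return DEVICE["owner_pid"] == os.getpid()

def check_in_forked_child(q):
    # Under fork, the child inherits DEVICE with the parent's PID, so
    # the handle looks initialized but fails the usability check.
    q.put(device_is_usable())

def forked_child_sees_usable_device():
    """Fork a child (as pytest-forked does per test) and report whether
    the inherited device handle is usable there."""
    ctx = mp.get_context("fork")
    q = ctx.Queue()
    p = ctx.Process(target=check_in_forked_child, args=(q,))
    p.start()
    usable = q.get()
    p.join()
    return usable
```

If this is what was happening, re-initializing the device inside each forked child (or using a spawn start method) would be the usual workaround, at the cost of per-test startup time.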
@mbrookhart Can you point to the failed log from a job in #10026?
Hi everyone. Let's see how the current build ends up (in theory the docs should be ok this time, and the Windows build too; the failures were just due to a URL change for the docs and to a GitHub maintenance window for the Windows build).
Cautiously closing this since we've changed the CI infra a good bit in the meantime; please re-open if this happens again.
On a recent PR that added a few extra tests to Relay, we discovered that pytest was running over the 4 GB RAM limit on the x386 CI job. We fixed this by reducing the memory use of the failing test by ~10%, but we're getting to the point in our test suite's size where running
pytest tests/python/relay
seems to accumulate too much in RAM via the tests and pytest logs to actually run on x386. I imagine we'll hit this again in the future; should we perhaps write a bash script to run the test files one by one for the 32-bit job? cc @driazati @areusch
Also wondering if @leandron might have some thoughts.
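The run-files-one-by-one idea could be sketched as follows. This is a hypothetical helper, not an actual CI script: each test file gets its own pytest subprocess, so memory accumulated during one file's run is released when that process exits. The function names, the `runner` hook, and the Relay path pattern are all illustrative assumptions.

```python
import glob
import subprocess
import sys

def run_files_in_subprocesses(paths, runner=None):
    """Run each test file in its own process; return the paths that failed.

    `runner` builds the command for a given file; by default it invokes
    `python -m pytest <file>`. A fresh interpreter per file means pytest's
    caches, logs, and imported modules don't accumulate across files.
    """
    if runner is None:
        runner = lambda p: [sys.executable, "-m", "pytest", p]
    failures = []
    for path in paths:
        result = subprocess.run(runner(path))
        if result.returncode != 0:
            failures.append(path)
    return failures

def relay_test_files(test_dir="tests/python/relay"):
    """Collect Relay test files (illustrative path and naming pattern)."""
    return sorted(glob.glob(f"{test_dir}/test_*.py"))
```

The trade-off is slower overall CI time (one interpreter startup and TVM import per file) in exchange for a bounded per-file memory footprint, plus losing pytest's single consolidated report.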
Branch/PR Failing
#10026