
[BugFix] Fix invalid CUDA ID error when loading Bounded variables across devices #2421

Merged: 2 commits into pytorch:main on Sep 5, 2024

Conversation

@cbhua (Contributor) commented Sep 4, 2024

Description

This pull request resolves an invalid CUDA ID error that occurs when transferring a Bounded variable between servers with different numbers of GPUs.

In the current implementation, changing the device of a Bounded variable goes through:

low=self.space.low.to(dest),
high=self.space.high.to(dest),

Because low and high are properties, this first moves self.space._low to the stored self.space.device before transferring it to the target device (dest):

@property
def low(self):
    return self._low.to(self.device)

@property
def high(self):
    return self._high.to(self.device)

This fails when a variable previously on cuda:7 (on an 8-GPU server) is loaded on a server with only one GPU, because it still attempts to access cuda:7.
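
For illustration, here is a minimal, self-contained sketch of the failure mode. ToySpace is a stand-in written for this description, not the torchrl source; it only mirrors the property pattern quoted above:

import torch

class ToySpace:
    def __init__(self, low: torch.Tensor, device: torch.device):
        self.device = device   # e.g. torch.device("cuda:7") restored from a checkpoint
        self._low = low.cpu()  # the bound itself is stored on CPU

    @property
    def low(self):
        # Moves to the *stored* device first; on a machine without cuda:7 this
        # raises "RuntimeError: CUDA error: invalid device ordinal".
        return self._low.to(self.device)

    def to(self, dest):
        # Mirrors the buggy call path above: self.low already targets the old
        # device before the transfer to dest even starts.
        return ToySpace(self.low.to(dest), torch.device(dest))

On a single-GPU machine, ToySpace(torch.zeros(3), torch.device("cuda:7")).to("cuda:0") fails inside the low property even though cuda:0 is available.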

Motivation and Context

The issue was identified when a model was trained and saved on a multi-GPU cluster and subsequently loaded on a local server equipped with fewer GPUs. The model’s saved state includes device information specific to the original multi-GPU environment. When attempting to assign the model to a device available on the current server, the discrepancy in device IDs between the environments leads to this bug.

This PR fixes #2420.

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)

To resolve this bug, I have adjusted the approach to device assignment. Instead of routing through the device information saved on the previous cluster, we update the device information directly to match the hardware available on the current server.
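
For comparison, a fixed transfer on the same ToySpace sketch (an illustration of the approach described here, not the actual torchrl diff) adopts the destination device without ever touching the stale one:

class ToySpaceFixed(ToySpace):
    def to(self, dest):
        # Adopt the destination device directly; _low is already on CPU, so the
        # stale cuda:7 stored in self.device is never accessed.
        return ToySpaceFixed(self._low, torch.device(dest))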

Separately, I think this setter code is duplicated; a possible cleanup is sketched after the code block below:

@low.setter
def low(self, value):
    self.device = value.device
    self._low = value.cpu()

@high.setter
def high(self, value):
    self.device = value.device
    self._high = value.cpu()

@low.setter
def low(self, value):
    self.device = value.device
    self._low = value.cpu()

@high.setter
def high(self, value):
    self.device = value.device
    self._high = value.cpu()
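
If that duplication is unwanted, one possible cleanup (a sketch only, using a hypothetical _set_bound helper inside the same class) would be to route both setters through a shared method:

def _set_bound(self, attr, value):
    # Record the incoming device once and keep the bound tensor on CPU.
    self.device = value.device
    setattr(self, attr, value.cpu())

@low.setter
def low(self, value):
    self._set_bound("_low", value)

@high.setter
def high(self, value):
    self._set_bound("_high", value)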

Checklist

  • I have read the CONTRIBUTION guide (required)
  • I have updated the tests accordingly (required for a bug fix or a new feature).


pytorch-bot bot commented Sep 4, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/rl/2421

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures, 1 Cancelled Job, 2 Pending, 6 Unrelated Failures

As of commit 9b22619 with merge base df4fa78:

NEW FAILURES - The following jobs have failed:

CANCELLED JOB - The following job was cancelled. Please retry:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @cbhua!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@facebook-github-bot added the CLA Signed label Sep 4, 2024
@cbhua cbhua marked this pull request as draft September 5, 2024 06:34
@vmoens added the bug label Sep 5, 2024
@vmoens (Contributor) left a comment


LGTM thanks!

@vmoens (Contributor) commented Sep 5, 2024

Feel free to mark it as ready for review for me to merge it.

@cbhua (Contributor, Author) commented Sep 5, 2024

Thanks for your review, @vmoens! I saw some failing pytest checks, so I marked this PR as a draft. The failures seem to come from GitHub Actions settings or dependency packages, so they should be fine. I will mark it as ready to merge!

@cbhua cbhua marked this pull request as ready for review September 5, 2024 10:37
@vmoens (Contributor) commented Sep 5, 2024

Errors in CI are not related to this PR, happy to merge it

@vmoens vmoens merged commit 57f0580 into pytorch:main Sep 5, 2024
64 of 74 checks passed
Labels

bug: Something isn't working
CLA Signed: This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed.
Development

Successfully merging this pull request may close these issues.

[BUG] Invalid CUDA ID error when loading Bounded variables across devices
3 participants