If PyTorch isn't built with distributed support (commonly the case on macOS), many features of lightly crash with an error. For example:
```
lib/python3.10/site-packages/lightly/loss/ntx_ent_loss.py:164: in forward
    labels = labels + dist.rank() * batch_size

    def rank() -> int:
        """Returns the rank of the current process."""
>       return dist.get_rank() if dist.is_initialized() else 0
E       AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
```
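For reference, a minimal sketch of how to hit this path (assuming a PyTorch build without distributed support, e.g. a default macOS wheel, and default `NTXentLoss` arguments):

```python
import torch
from lightly.loss import NTXentLoss

criterion = NTXentLoss()
out0 = torch.randn(8, 128)  # embeddings from view 0
out1 = torch.randn(8, 128)  # embeddings from view 1

# On builds without distributed support this raises:
# AttributeError: module 'torch.distributed' has no attribute 'is_initialized'
loss = criterion(out0, out1)
```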
Any time `torch.distributed` is used, it should be wrapped by:

```python
if torch.distributed.is_available():
```
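For example, the `rank()` helper from the traceback above could check availability before touching the module. This is a sketch of the suggested guard, not necessarily the exact patch that lands:

```python
import torch.distributed as dist


def rank() -> int:
    """Returns the rank of the current process.

    Falls back to 0 when torch.distributed is not available (e.g. a PyTorch
    build without distributed support) or has not been initialized.
    """
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    return 0
```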
Lightning themselves have introduced and fixed this same bug several times:
Ah, thanks for bringing this up! We use dist in multiple places and should definitely check for it.
This should be fixed with #1180.