Hey everyone, first off thank you for developing this amazing library.
🐛 Describe the bug
There appears to be a bug, though I am not sure whether it is related to Avalanche or to MPS: if you attempt to run the multihead example without setting the num_workers parameter (which I would assume defaults to 1), the training loop stops working and a KeyError is raised with a seemingly random key.
🐜 To Reproduce
In lines 71 and 72 of the multihead.py example, just replace the existing calls with the following lines:

strategy.train(train_task)
strategy.eval(test_stream)
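For context, this is roughly what the surrounding loop in the example looks like after the change. The loop structure and the stream variable names are assumptions based on the example's usual layout; only the two calls above are taken from this report.

```python
# Sketch of the modified training loop in examples/multihead.py.
# Loop structure and variable names are assumed; only the two calls
# without num_workers are the actual change described above.
for train_task in train_stream:
    strategy.train(train_task)   # no num_workers kwarg: KeyError raised on 'mps'
    strategy.eval(test_stream)   # evaluate on the full test stream
```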
🦋 Additional context
The issue only seems to occur when running the script on 'mps' devices; when running the same piece of code on a server with CUDA, the issue does not appear.
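In case it helps with reproduction, this is roughly how the device is picked on my end. It is standard PyTorch device selection, not code taken from the example itself:

```python
import torch

# Standard PyTorch device selection (not from the example).
# The KeyError only shows up when the resulting device is "mps".
if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
```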