I get the following warning:

pytorch-vqa/model.py:96: UserWarning: RNN module weights are not part of single contiguous chunk of memory. This means they need to be compacted at every call, possibly greatly increasing memory usage. To compact weights again call flatten_parameters().

Even though I call self.lstm.flatten_parameters() before _, (_, c) = self.lstm(packed), the program consumes almost all of my memory (16 GB), which is abnormal. In an earlier issue you stated that you can run an epoch in 7 minutes, which I guess is because you have an SSD.
Let's check the code to see what causes the memory leak :)
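For reference, here is a minimal sketch of what moving flatten_parameters() into the forward pass looks like. Only the self.lstm.flatten_parameters() and _, (_, c) = self.lstm(packed) lines mirror the issue; the class, argument names, and sizes are illustrative, not the repo's actual model.py.

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence

    class TextEncoder(nn.Module):
        """Illustrative question encoder; only the LSTM call mirrors model.py."""
        def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=1024):
            super().__init__()
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(input_size=embed_dim, hidden_size=hidden_dim, num_layers=1)

        def forward(self, q, q_len):
            # q: [batch, max_len] token ids, assumed sorted by decreasing length
            embedded = self.embedding(q)
            packed = pack_padded_sequence(embedded, q_len.cpu(), batch_first=True)
            # Compact the weights on this replica right before the LSTM call.
            # Under nn.DataParallel each GPU holds its own copy of the weights,
            # so calling flatten_parameters() only once in __init__ does not
            # silence the warning on the replicas.
            self.lstm.flatten_parameters()
            _, (_, c) = self.lstm(packed)   # keep only the final cell state
            return c.squeeze(0)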
Are you running it on multiple GPUs? I've only seen that warning on that setup. Running it on 1 GPU, I get a usage of just under 3 GiB; on 2 GPUs, it goes up to a total of 4 GiB (~2 GiB each).
Are you sure that you have set up updated versions of PyTorch, CUDA and cuDNN correctly?
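To check that quickly, something like the snippet below prints the installed PyTorch, CUDA, and cuDNN versions and how many GPUs the process can see; these are standard PyTorch calls, nothing project-specific:

    import torch

    print("PyTorch:", torch.__version__)
    print("CUDA runtime:", torch.version.cuda)        # None if built without CUDA
    print("cuDNN:", torch.backends.cudnn.version())   # None if cuDNN is unavailable
    print("cuDNN enabled:", torch.backends.cudnn.enabled)
    print("Visible GPUs:", torch.cuda.device_count())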