Hello, I'm currently attempting to freeze the backbone of YOLOv8 for fine-tuning purposes. As a baseline, I tried training the model with all layers frozen. Surprisingly, it produced favorable results. Initially, I believed this to be an issue with PyTorch's implementation of `requires_grad = False`. However, after perusing the documentation at the following link: Transfer Learning with Frozen Layers, it appears this is the expected behavior.
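(For reference, this is the behavior I expected, based on a plain-PyTorch toy example rather than YOLOv8 itself:)

```python
import torch

# Plain-PyTorch baseline: a parameter with requires_grad = False receives no
# gradient, so the optimizer step leaves it untouched.
frozen = torch.nn.Parameter(torch.ones(3), requires_grad=False)
trainable = torch.nn.Parameter(torch.ones(3))

opt = torch.optim.SGD([frozen, trainable], lr=0.1, momentum=0.9)

loss = ((frozen + trainable) ** 2).sum()
loss.backward()
opt.step()

print(frozen.grad)  # None -> frozen stays at tensor([1., 1., 1.])
print(trainable)    # moved by the SGD step
```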
Consequently, I'm trying to understand how the weights and biases can still update even when `requires_grad = False` has been applied to all layers. Here's my code:
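(Sketched below with placeholder checkpoint/dataset names and epoch count, rather than my exact values:)

```python
from ultralytics import YOLO

# Load a pretrained checkpoint (placeholder name)
model = YOLO("yolov8n.pt")

# Freeze every layer before training by disabling gradient tracking
for name, param in model.model.named_parameters():
    param.requires_grad = False

# Fine-tune on a dataset (placeholder config and epoch count)
results = model.train(data="coco128.yaml", epochs=10)
```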
I also tried setting the momentum and weight_decay to zero after freezing the layers, as one user suggested the model might still be learning through these optimizer terms even with gradients disabled. Any insights would be appreciated.
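(Concretely, that second attempt just zeroes the two optimizer terms in the same training call, again with placeholder values:)

```python
# Same setup as above, but with the optimizer's momentum and weight decay
# zeroed out so they cannot nudge the weights on their own
results = model.train(
    data="coco128.yaml",  # placeholder dataset config
    epochs=10,            # placeholder epoch count
    momentum=0.0,
    weight_decay=0.0,
)
```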