Questions about fine-tuning KerasCV models #2446
Justim-Chung asked this question in General
I have read some of the Keras documentation, and I have a few questions about how to fine-tune KerasCV models. I hope to get some answers from the community.
In keras.applications, each pre-trained model has a corresponding preprocess_input function. However, it seems that KerasCV models do not require a separate preprocessing step. Is this correct? A sketch of what I mean follows below.
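Here is a minimal sketch of the difference I am asking about. The KerasCV preset name is only an illustration and may differ depending on the KerasCV version; my assumption is that the preset already bundles whatever rescaling it needs.

```python
import numpy as np
import tensorflow as tf
import keras_cv

# Raw images in the [0, 255] range, as they come off disk.
images = np.random.uniform(0, 255, size=(4, 224, 224, 3)).astype("float32")

# keras.applications: preprocessing is an explicit, separate step.
app_model = tf.keras.applications.ResNet50(weights="imagenet")
app_inputs = tf.keras.applications.resnet50.preprocess_input(images.copy())
app_preds = app_model(app_inputs)

# KerasCV: my understanding is that the preset already includes the
# rescaling/normalization it needs, so raw images can be passed directly.
cv_model = keras_cv.models.ImageClassifier.from_preset(
    "resnet50_v2_imagenet_classifier"  # example preset; names vary by version
)
cv_preds = cv_model(images)
```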
The official Keras documentation specifically mentions that, when fine-tuning, attention should be paid to the `training` argument of BatchNormalization layers. Should we also pass `training=False` to the backbone when fine-tuning KerasCV models? The pattern I am referring to is sketched below.
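This is the pattern from the Keras transfer-learning guide, applied to a KerasCV backbone. The backbone preset name is just an example and may not match your KerasCV version.

```python
import tensorflow as tf
import keras_cv

# Example backbone preset; the exact name may differ in your KerasCV version.
backbone = keras_cv.models.ResNet50V2Backbone.from_preset("resnet50_v2_imagenet")
backbone.trainable = False  # freeze all backbone weights

inputs = tf.keras.Input(shape=(224, 224, 3))
# Passing training=False keeps BatchNormalization layers in inference mode,
# so their moving mean/variance are not updated while the new head is trained.
x = backbone(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```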
The official Keras documentation suggests freezing certain layers during fine-tuning by setting their `trainable` attribute to False. However, in some Kaggle notebooks (such as this one), I have noticed that the layers of the model are not frozen at all. Which approach is considered best practice? I have sketched the two approaches below.
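The two approaches I am comparing, continuing from the sketch above (it reuses `backbone` and `model`; the data and hyper-parameters are arbitrary placeholders):

```python
import numpy as np
import tensorflow as tf

# Dummy data standing in for a real dataset.
x_train = np.random.uniform(0, 255, size=(32, 224, 224, 3)).astype("float32")
y_train = np.random.randint(0, 10, size=(32,))
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(8)

# Approach A (Keras docs): train only the new head with the backbone frozen,
# then unfreeze everything and fine-tune with a much lower learning rate.
backbone.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=3)

backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # low LR after unfreezing
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=3)

# Approach B (what I see in some Kaggle notebooks): leave every layer
# trainable from the start and train the whole model end to end.
```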
Thank you all for your answers!