Hello, I've always been confused about how BWN and XNOR-Net can be trained on large neural networks such as VGG-16 or ResNet-50.
I find it quite difficult to convert all the layers into binarized layers at once, because exploding or vanishing gradients often occur during training. Converting one layer at a time might avoid the problem, but is there any approach that handles this without having to train each binarized layer separately? (A sketch of what I mean by a binarized layer is below.)
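For concreteness, here is a minimal PyTorch sketch of a BWN-style binarized conv layer with a straight-through estimator, the usual way such layers are trained end-to-end; the class name `BinaryConv2d` is my own, not from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryConv2d(nn.Conv2d):
    """Conv layer with BWN-style binary weights: sign(W) scaled by
    alpha = mean(|W|) per output channel. Real-valued weights are
    kept and updated; binarization happens only in the forward pass."""

    def forward(self, x):
        w = self.weight
        # Per-output-channel scaling factor alpha = mean(|W|)
        alpha = w.abs().mean(dim=(1, 2, 3), keepdim=True)
        w_bin = alpha * torch.sign(w)
        # Straight-through estimator: forward uses w_bin, backward
        # treats binarization as identity so gradients reach w
        w_ste = w + (w_bin - w).detach()
        return F.conv2d(x, w_ste, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

With this setup all layers can in principle be swapped at once, since the gradient flows through the full-precision shadow weights; whether that alone keeps training stable on VGG-16 or ResNet-50 is exactly my question.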