A Classification Problem with Trained Model #1391
Comments
Overfitting; a simpler net may work.
Oh sorry, if you used the validation dataset as the test set during training, then the problem only occurs in your testing procedure; you can try testing in C++.
Your validation set cannot have the same distribution; several kinds of error are possible.
I do not use the Python interface, for various reasons. You can try to print the predictions within Caffe itself. Adjust the solver.prototxt:
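A minimal sketch of one possible adjustment, assuming the goal is to have the solver's TEST phase cover the whole validation set; the batch size of 1, the 800-image count, and the paths are assumptions:

```
# solver.prototxt (sketch; paths and numbers are assumptions)
net: "models/mymodel/train_val.prototxt"  # hypothetical path
test_iter: 800       # with a TEST-phase batch size of 1, 800 iterations cover an 800-image validation set
test_interval: 1000  # run the test phase every 1000 training iterations
# ... keep the remaining solver settings (base_lr, lr_policy, max_iter, etc.) as before
```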
And instead of the softmax_loss layer in your train_val.prototxt, you can use a softmax layer:
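A rough sketch of that replacement; the bottom blob name fc8 is an assumption taken from the standard CaffeNet definition:

```
# train_val.prototxt (sketch; the bottom blob "fc8" is an assumption)

# Original loss layer:
# layer {
#   name: "loss"
#   type: "SoftmaxWithLoss"
#   bottom: "fc8"
#   bottom: "label"
#   top: "loss"
# }

# Replacement: a plain Softmax layer whose top is not consumed by any other
# layer, so Caffe treats it as a network output and prints its values.
layer {
  name: "prob"
  type: "Softmax"
  bottom: "fc8"
  top: "prob"
  include { phase: TEST }
}
```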
The output will be the predicted class probabilities for each test image. Use them to count how many predictions are correct, then simply divide that count by 800 if your validation set has 800 images. You may compute the correct error from there.
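For reference, a hedged sketch of running such a test with the C++ caffe tool rather than the Python interface; the paths are placeholders, and the iteration count assumes a TEST-phase batch size of 1 and an 800-image validation set:

```
# Run the TEST phase for 800 iterations (batch size 1 => 800 images) and keep the log.
./build/tools/caffe test \
    --model=models/mymodel/train_val.prototxt \
    --weights=models/mymodel/mymodel_iter_100000.caffemodel \
    --iterations=800 \
    --gpu=0 2>&1 | tee valid_test.log
```

Counting the correctly classified images in the resulting log and dividing by 800 gives a validation accuracy that can be compared against the 92% reported during training.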
@LawBow Hi,
I am also in a similar situation; I tried changing the augmentation as mentioned, but it hasn't worked. Please, can someone help me solve it? I am attaching a few lines of the log file here: I0413 06:08:56.441606 22206 net.cpp:159] Memory required for data: 5099532756
I built a training dataset with 250 categories, each containing more than 10k images. Following the ImageNet training steps, I got 92% accuracy in testing.
I then used the trained model on a validation dataset. The validation dataset was randomly selected from the same database, so its distribution matches the training and test data. I also subtract the mean file computed on the training dataset. But the result is only 44%, and I don't know what the problem is.