[Python] Same accuracy every test #5144
Comments
Please ask modeling questions on the mailing list. This might be "working", but it's picking the most common class all the time. See https://github.com/BVLC/caffe/blob/master/CONTRIBUTING.md.
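For anyone hitting the same symptom, below is a minimal sketch of how to check this with pycaffe. The file names (`deploy.prototxt`, the `.caffemodel` snapshot, `test_images.txt`) are placeholders, and the output blob is assumed to be named `prob` as in the stock CaffeNet deploy file; adjust them to your setup. If one class accounts for nearly all predictions, the flat test accuracy is just that class's share of the test set.

```python
# Minimal sketch (assumed file names and 'prob' output blob, as in the stock
# CaffeNet deploy.prototxt): count which classes a trained net predicts.
import numpy as np
import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'snapshot_iter_1000.caffemodel', caffe.TEST)

# Standard pycaffe preprocessing for images loaded with caffe.io.load_image
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))      # HWC -> CHW
transformer.set_raw_scale('data', 255)            # [0, 1] -> [0, 255]
transformer.set_channel_swap('data', (2, 1, 0))   # RGB -> BGR

predictions = []
for line in open('test_images.txt'):              # one image path per line (placeholder list)
    image = caffe.io.load_image(line.strip())
    net.blobs['data'].data[...] = transformer.preprocess('data', image)
    predictions.append(net.forward()['prob'][0].argmax())

# A single dominant class here means the net has collapsed to the majority class.
print(np.bincount(np.array(predictions)))
```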
I am also in a similar situation. I tried changing the augmentation as mentioned, but it hasn't worked. Please, someone help me solve it. I'll attach the last lines of the log file here:
I0413 06:08:56.441606 22206 net.cpp:159] Memory required for data: 5099532756
Please help me understand why I always get mAP = 0 while doing object detection in DIGITS using Caffe.
Issue summary
Hi, everyone,
I've been using Caffe for a while, and recently I've faced some classification problems. I'm training CaffeNet with 3 (three) classes. The training is OK (the loss is decaying), but at every test the accuracy is the same, as in the example below:
I1231 18:37:35.865491 27118 solver.cpp:406] Test net output #0: accuracy = 0.666667
I1231 18:37:35.865628 27118 solver.cpp:406] Test net output #1: loss = 1.17102 (* 1 = 1.17102 loss)
I1231 18:37:35.906298 27118 solver.cpp:229] Iteration 100, loss = 0.979326
I1231 18:37:35.906389 27118 solver.cpp:245] Train net output #0: loss = 0.979326 (* 1 = 0.979326 loss)
I1231 18:37:35.906482 27118 sgd_solver.cpp:106] Iteration 100, lr = 0.001
I1231 18:37:40.332121 27118 solver.cpp:229] Iteration 150, loss = 0.633591
I1231 18:37:40.332187 27118 solver.cpp:245] Train net output #0: loss = 0.633592 (* 1 = 0.633592 loss)
I1231 18:37:40.332203 27118 sgd_solver.cpp:106] Iteration 150, lr = 0.001
I1231 18:37:44.694074 27118 solver.cpp:338] Iteration 200, Testing net (#0)
I1231 18:37:47.638283 27118 blocking_queue.cpp:50] Data layer prefetch queue empty
I1231 18:37:53.485548 27118 blocking_queue.cpp:50] Data layer prefetch queue empty
I1231 18:37:53.628880 27118 solver.cpp:406] Test net output #0: accuracy = 0.666667
I1231 18:37:53.629004 27118 solver.cpp:406] Test net output #1: loss = 0.562988 (* 1 = 0.562988 loss)
I ran it for 1000 iterations and tested the net every 100 iterations, and the accuracy stays the same while the loss keeps decaying.
Does anyone know why this is happening?
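With 3 classes and a fixed accuracy of 0.666667, one thing worth checking is whether roughly two thirds of the test set carries a single label, since a net that always predicts that label would score exactly this. A minimal sketch for tallying the labels in the test LMDB (the path `test_lmdb` is a placeholder) could look like this:

```python
# Minimal sketch (assumed LMDB path): tally the labels stored in the test LMDB.
import lmdb
from collections import Counter
from caffe.proto import caffe_pb2

counts = Counter()
env = lmdb.open('test_lmdb', readonly=True)
with env.begin() as txn:
    for _, value in txn.cursor():
        datum = caffe_pb2.Datum()
        datum.ParseFromString(value)
        counts[datum.label] += 1

total = sum(counts.values())
for label, n in sorted(counts.items()):
    print('class %d: %d images (%.4f of the set)' % (label, n, float(n) / total))
```

If one label covers about two thirds of the set, the constant 0.666667 matches a majority-class predictor rather than genuine learning, and rebalancing the training data would be the usual next step.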