
A Classification Problem with Trained Model #1391

Closed
LawBow opened this issue Nov 1, 2014 · 6 comments
LawBow commented Nov 1, 2014

I built a training dataset with 250 categories, each containing more than 10k images. Following the ImageNet training steps, I reached 92% accuracy on the test set.

I1101 21:19:04.669459 19693 solver.cpp:247] Iteration 13000, Testing net (#0)
I1101 21:20:40.372087 19693 solver.cpp:298]     Test net output #0: accuracy = 0.92135
I1101 21:20:40.372143 19693 solver.cpp:298]     Test net output #1: loss = 0.290574 (* 1 = 0.290574 loss)
I1101 21:20:41.380187 19693 solver.cpp:191] Iteration 13000, loss = 0.310368
I1101 21:20:41.380216 19693 solver.cpp:206]     Train net output #0: loss = 0.310368 (* 1 = 0.310368 loss)
I1101 21:20:41.380236 19693 solver.cpp:403] Iteration 13000, lr = 0.01

I then used this trained model to classify a validation dataset. The validation set was randomly selected from the same database, so its distribution should match the training and test data. I also subtracted the training-set mean file. But the result is only 44% accuracy. What could the problem be?

import caffe
import numpy as np

net = caffe.Classifier(MODEL_FILE, PRETRAINED,
                       mean=np.load(mean_file),
                       raw_scale=255,
                       image_dims=(64, 64))
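One common cause of a train/test accuracy gap like this is subtracting the mean twice (e.g. once in the `Classifier` and once more in a data-layer transform). A minimal NumPy sketch with synthetic data, not the actual Caffe pipeline, shows how much that shifts the inputs (the mean value 120 and image shape are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for a preprocessed input in the raw_scale=255 range.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, size=(3, 64, 64))
mean = np.full((3, 64, 64), 120.0)  # stand-in for the dataset mean file

once = img - mean    # mean subtracted once, as during training
twice = once - mean  # mean subtracted twice, e.g. in both the
                     # Classifier call and a data-layer transform

# The double subtraction shifts every pixel by the full mean again,
# so the net sees inputs far from its training distribution.
print(once.mean(), twice.mean())
```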
dabilied commented Nov 2, 2014

Overfitting; a simpler net may work.

dabilied commented Nov 3, 2014

Oh, sorry. If you used the validation dataset for testing during training, then the problem seems to occur only in the testing procedure; you can try testing in C++.

PatWie (Contributor) commented Nov 3, 2014

Your validation set cannot really have the same distribution. Possible errors:

  • subtracting the mean image twice
  • wrong image dimensions (testing on the wrong part of the image)
  • wrong model (double-check the path)
  • wrong input_dim in your deploy net layout
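On the last bullet: a deploy header consistent with image_dims=(64, 64) above would look roughly like this (the blob name "data" and the 3-channel input are assumptions, not taken from the issue):

```
input: "data"
input_dim: 1
input_dim: 3
input_dim: 64
input_dim: 64
```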

I do not use the Python interface, for several reasons. You can try to print the predictions within Caffe itself. Adjust the solver.prototxt:

  • set base_lr to 0
  • set max_iter to 1
  • change the path to your validation leveldb

And instead of the softmax_loss layer in your train_val.prototxt:

layers {
  name: "loss"
  type: SOFTMAX_LOSS
  bottom: "full_layer_002"
  bottom: "label"
  top: "loss"
}

you can use a softmax layer followed by
https://github.com/PatWie/caffe/blob/teststatistic/src/caffe/layers/statistic_layer.cpp
which simply prints out the predictions:

layers {
  name: "prob"
  type: SOFTMAX
  bottom: "full_layer_002"
  top: "prob"
}
layers {
  name: "stats"
  type: TESTSTATISTIC
  bottom: "prob"
  bottom: "label"
  top: "stats"
}

The output will be

// layout of line
[>>] true_label prob_0 label_0 prob_1 label_1 prob_2 label_2 prob_3 label_3

use

caffe train --solver=YOUR_SOLVER --weights=YOUR_MODEL &> run_plot.log

Then simply

grep "\[>>\]" run_plot.log | tail -n 800 |  sed 's/\[>>\] //g'  > prediction_test.csv

assuming your validation set has 800 images. You can then compute the true error from there.
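Computing top-1 accuracy from that file is a few lines. A minimal sketch in Python over synthetic stand-in lines (the real input would be prediction_test.csv from the grep step above); it assumes prob_0 is the highest probability, i.e. label_0 is the top-1 prediction:

```python
import io

# Synthetic stand-in for prediction_test.csv; layout per the statistic
# layer: true_label, then (prob, label) pairs sorted by probability.
csv_text = """3 0.90 3 0.05 1 0.03 7 0.02 0
5 0.60 2 0.30 5 0.06 1 0.04 9
1 0.80 1 0.10 0 0.06 2 0.04 5
"""

correct = total = 0
for line in io.StringIO(csv_text):
    fields = line.split()
    true_label = int(fields[0])
    top1_label = int(fields[2])  # label paired with prob_0
    correct += top1_label == true_label
    total += 1

accuracy = correct / total
print(f"top-1 accuracy: {accuracy:.2f}")  # 2 of 3 correct here
```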

tjusxh commented Nov 14, 2014

@LawBow Hi, could you help me? I have a question: my fully connected layer has 160 dimensions, but the softmax outputs 2000+ categories. Can that work?
Thanks,
tjusxh

shelhamer (Member) commented

@PatWie's summary is good. These problems will go away once #1245 is addressed.

sulthanashafi commented

I am also in a similar situation. I tried changing the augmentation as mentioned, but it has not worked. Could someone please help me solve it? I attach the last lines of the log file here:

I0413 06:08:56.441606 22206 net.cpp:159] Memory required for data: 5099532756
I0413 06:08:56.441609 22206 net.cpp:222] mAP does not need backward computation.
I0413 06:08:56.441613 22206 net.cpp:222] score does not need backward computation.
I0413 06:08:56.441617 22206 net.cpp:222] cluster_gt does not need backward computation.
I0413 06:08:56.441622 22206 net.cpp:222] cluster does not need backward computation.
I0413 06:08:56.441649 22206 net.cpp:220] coverage_loss needs backward computation.
I0413 06:08:56.441654 22206 net.cpp:220] bbox_loss needs backward computation.
I0413 06:08:56.441659 22206 net.cpp:220] bbox-obj-norm needs backward computation.
[... "needs backward computation" repeated for every layer from bbox-norm down through the inception_3a–5b blocks to conv1/relu_7x7 ...]
I0413 06:08:56.442297 22206 net.cpp:220] conv1/7x7_s2 needs backward computation.
I0413 06:08:56.442301 22206 net.cpp:222] bb-obj-norm does not need backward computation.
I0413 06:08:56.442307 22206 net.cpp:222] bb-label-norm does not need backward computation.
I0413 06:08:56.442313 22206 net.cpp:222] obj-block_obj-block_0_split does not need backward computation.
I0413 06:08:56.442318 22206 net.cpp:222] obj-block does not need backward computation.
I0413 06:08:56.442325 22206 net.cpp:222] size-block_size-block_0_split does not need backward computation.
I0413 06:08:56.442329 22206 net.cpp:222] size-block does not need backward computation.
I0413 06:08:56.442335 22206 net.cpp:222] coverage-block does not need backward computation.
I0413 06:08:56.442342 22206 net.cpp:222] coverage-label_slice-label_4_split does not need backward computation.
I0413 06:08:56.442348 22206 net.cpp:222] obj-label_slice-label_3_split does not need backward computation.
I0413 06:08:56.442353 22206 net.cpp:222] size-label_slice-label_2_split does not need backward computation.
I0413 06:08:56.442358 22206 net.cpp:222] bbox-label_slice-label_1_split does not need backward computation.
I0413 06:08:56.442364 22206 net.cpp:222] foreground-label_slice-label_0_split does not need backward computation.
I0413 06:08:56.442370 22206 net.cpp:222] slice-label does not need backward computation.
I0413 06:08:56.442375 22206 net.cpp:222] val_transform does not need backward computation.
I0413 06:08:56.442380 22206 net.cpp:222] val_label does not need backward computation.
I0413 06:08:56.442384 22206 net.cpp:222] val_data does not need backward computation.
I0413 06:08:56.442387 22206 net.cpp:264] This network produces output loss_bbox
I0413 06:08:56.442391 22206 net.cpp:264] This network produces output loss_coverage
I0413 06:08:56.442395 22206 net.cpp:264] This network produces output mAP
I0413 06:08:56.442399 22206 net.cpp:264] This network produces output precision
I0413 06:08:56.442409 22206 net.cpp:264] This network produces output recall
I0413 06:08:56.442546 22206 net.cpp:284] Network initialization done.
I0413 06:08:56.443408 22206 solver.cpp:60] Solver scaffolding done.
I0413 06:08:56.448447 22206 caffe.cpp:231] Starting Optimization
I0413 06:08:56.448458 22206 solver.cpp:304] Solving
I0413 06:08:56.448462 22206 solver.cpp:305] Learning Rate Policy: step
I0413 06:08:56.454710 22206 solver.cpp:362] Iteration 0, Testing net (#0)
I0413 06:08:56.454725 22206 net.cpp:723] Ignoring source layer train_data
I0413 06:08:56.454730 22206 net.cpp:723] Ignoring source layer train_label
I0413 06:08:56.454733 22206 net.cpp:723] Ignoring source layer train_transform
I0413 06:09:23.007010 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 06:09:23.007134 22206 solver.cpp:429] Test net output #1: loss_coverage = 305.735 (* 1 = 305.735 loss)
I0413 06:09:23.007150 22206 solver.cpp:429] Test net output #2: mAP = 0
I0413 06:09:23.007155 22206 solver.cpp:429] Test net output #3: precision = 0
I0413 06:09:23.007159 22206 solver.cpp:429] Test net output #4: recall = 0
I0413 06:09:40.952916 22206 solver.cpp:242] Iteration 0 (0 iter/s, 44.5051s/40 iter), loss = 317.739
I0413 06:09:40.952960 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 06:09:40.952968 22206 solver.cpp:261] Train net output #1: loss_coverage = 318.719 (* 1 = 318.719 loss)
I0413 06:09:40.952993 22206 sgd_solver.cpp:106] Iteration 0, lr = 0.001
I0413 06:12:09.236304 22206 solver.cpp:242] Iteration 40 (0.26975 iter/s, 148.286s/40 iter), loss = -6.43825e-20
I0413 06:12:09.236418 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 06:12:09.236426 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 06:12:09.236436 22206 sgd_solver.cpp:106] Iteration 40, lr = 0.001
[... the same pattern repeats every 40 iterations through iteration 1160: loss = -6.43825e-20, loss_bbox = 0, loss_coverage = 0, lr = 0.001, with snapshots at iterations 322, 644, and 966 ...]
I0413 07:24:12.001484 22206 solver.cpp:242] Iteration 1200 (0.268081 iter/s, 149.209s/40 iter), loss = -6.43825e-20
I0413 07:24:12.001596 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:24:12.001606 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:24:12.001617 22206 sgd_solver.cpp:106] Iteration 1200, lr = 0.001
I0413 07:26:41.062177 22206 solver.cpp:242] Iteration 1240 (0.268343 iter/s, 149.063s/40 iter), loss = -6.43825e-20
I0413 07:26:41.062252 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:26:41.062261 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:26:41.062273 22206 sgd_solver.cpp:106] Iteration 1240, lr = 0.001
I0413 07:29:10.076431 22206 solver.cpp:242] Iteration 1280 (0.268427 iter/s, 149.016s/40 iter), loss = -6.43825e-20
I0413 07:29:10.076552 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:29:10.076562 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:29:10.076573 22206 sgd_solver.cpp:106] Iteration 1280, lr = 0.001
I0413 07:29:36.118050 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_1288.caffemodel
I0413 07:29:36.210352 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1288.solverstate
I0413 07:31:39.303977 22206 solver.cpp:242] Iteration 1320 (0.268043 iter/s, 149.23s/40 iter), loss = -6.43825e-20
I0413 07:31:39.304044 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:31:39.304054 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:31:39.304064 22206 sgd_solver.cpp:106] Iteration 1320, lr = 0.001
I0413 07:34:10.779850 22206 solver.cpp:242] Iteration 1360 (0.264065 iter/s, 151.478s/40 iter), loss = -6.43825e-20
I0413 07:34:10.779914 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:34:10.779924 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:34:10.779934 22206 sgd_solver.cpp:106] Iteration 1360, lr = 0.001
I0413 07:36:40.017544 22206 solver.cpp:242] Iteration 1400 (0.268025 iter/s, 149.24s/40 iter), loss = -6.43825e-20
I0413 07:36:40.017660 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:36:40.017670 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:36:40.017683 22206 sgd_solver.cpp:106] Iteration 1400, lr = 0.001
I0413 07:39:09.247287 22206 solver.cpp:242] Iteration 1440 (0.268039 iter/s, 149.232s/40 iter), loss = -6.43825e-20
I0413 07:39:09.247429 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:39:09.247440 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:39:09.247452 22206 sgd_solver.cpp:106] Iteration 1440, lr = 0.001
I0413 07:41:38.391238 22206 solver.cpp:242] Iteration 1480 (0.268194 iter/s, 149.146s/40 iter), loss = -6.43825e-20
I0413 07:41:38.391366 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:41:38.391376 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:41:38.391387 22206 sgd_solver.cpp:106] Iteration 1480, lr = 0.001
I0413 07:44:07.598134 22206 solver.cpp:242] Iteration 1520 (0.26808 iter/s, 149.209s/40 iter), loss = -6.43825e-20
I0413 07:44:07.599515 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:44:07.599524 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:44:07.599535 22206 sgd_solver.cpp:106] Iteration 1520, lr = 0.001
I0413 07:46:36.945183 22206 solver.cpp:242] Iteration 1560 (0.267831 iter/s, 149.348s/40 iter), loss = -6.43825e-20
I0413 07:46:36.945298 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:46:36.945309 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:46:36.945320 22206 sgd_solver.cpp:106] Iteration 1560, lr = 0.001
I0413 07:49:06.170992 22206 solver.cpp:242] Iteration 1600 (0.268046 iter/s, 149.228s/40 iter), loss = -6.43825e-20
I0413 07:49:06.171119 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:49:06.171129 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:49:06.171140 22206 sgd_solver.cpp:106] Iteration 1600, lr = 0.001
I0413 07:49:39.681165 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_1610.caffemodel
I0413 07:49:39.773512 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1610.solverstate
I0413 07:49:39.850608 22206 solver.cpp:362] Iteration 1610, Testing net (#0)
I0413 07:49:39.850631 22206 net.cpp:723] Ignoring source layer train_data
I0413 07:49:39.850636 22206 net.cpp:723] Ignoring source layer train_label
I0413 07:49:39.850641 22206 net.cpp:723] Ignoring source layer train_transform
I0413 07:49:58.498661 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:49:58.498684 22206 solver.cpp:429] Test net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:49:58.498710 22206 solver.cpp:429] Test net output #2: mAP = 0
I0413 07:49:58.498716 22206 solver.cpp:429] Test net output #3: precision = 0
I0413 07:49:58.498721 22206 solver.cpp:429] Test net output #4: recall = 0
I0413 07:51:54.017457 22206 solver.cpp:242] Iteration 1640 (0.23831 iter/s, 167.849s/40 iter), loss = -6.43825e-20
I0413 07:51:54.017525 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:51:54.017534 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:51:54.017545 22206 sgd_solver.cpp:106] Iteration 1640, lr = 0.001
I0413 07:54:23.078280 22206 solver.cpp:242] Iteration 1680 (0.268343 iter/s, 149.063s/40 iter), loss = -6.43825e-20
I0413 07:54:23.078408 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:54:23.078419 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:54:23.078431 22206 sgd_solver.cpp:106] Iteration 1680, lr = 0.001
I0413 07:56:52.164952 22206 solver.cpp:242] Iteration 1720 (0.268297 iter/s, 149.089s/40 iter), loss = -6.43825e-20
I0413 07:56:52.165076 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:56:52.165087 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:56:52.165098 22206 sgd_solver.cpp:106] Iteration 1720, lr = 0.001
I0413 07:59:21.380599 22206 solver.cpp:242] Iteration 1760 (0.268065 iter/s, 149.218s/40 iter), loss = -6.43825e-20
I0413 07:59:21.380702 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 07:59:21.380712 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 07:59:21.380722 22206 sgd_solver.cpp:106] Iteration 1760, lr = 0.001
I0413 08:01:50.508353 22206 solver.cpp:242] Iteration 1800 (0.268223 iter/s, 149.13s/40 iter), loss = -6.43825e-20
I0413 08:01:50.508491 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:01:50.508500 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:01:50.508513 22206 sgd_solver.cpp:106] Iteration 1800, lr = 0.001
I0413 08:04:19.601210 22206 solver.cpp:242] Iteration 1840 (0.268285 iter/s, 149.095s/40 iter), loss = -6.43825e-20
I0413 08:04:19.601328 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:04:19.601339 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:04:19.601351 22206 sgd_solver.cpp:106] Iteration 1840, lr = 0.001
I0413 08:06:48.672987 22206 solver.cpp:242] Iteration 1880 (0.268323 iter/s, 149.074s/40 iter), loss = -6.43825e-20
I0413 08:06:48.673053 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:06:48.673063 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:06:48.673074 22206 sgd_solver.cpp:106] Iteration 1880, lr = 0.001
I0413 08:09:17.874629 22206 solver.cpp:242] Iteration 1920 (0.26809 iter/s, 149.204s/40 iter), loss = -6.43825e-20
I0413 08:09:17.874747 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:09:17.874758 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:09:17.874768 22206 sgd_solver.cpp:106] Iteration 1920, lr = 0.001
I0413 08:09:58.770267 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_1932.caffemodel
I0413 08:09:58.862397 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1932.solverstate
I0413 08:11:46.843391 22206 solver.cpp:242] Iteration 1960 (0.268509 iter/s, 148.971s/40 iter), loss = -6.43825e-20
I0413 08:11:46.843502 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:11:46.843513 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:11:46.843523 22206 sgd_solver.cpp:106] Iteration 1960, lr = 0.001
I0413 08:14:15.769006 22206 solver.cpp:242] Iteration 2000 (0.268587 iter/s, 148.928s/40 iter), loss = -6.43825e-20
I0413 08:14:15.769112 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:14:15.769122 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:14:15.769134 22206 sgd_solver.cpp:106] Iteration 2000, lr = 0.001
I0413 08:16:44.733536 22206 solver.cpp:242] Iteration 2040 (0.268516 iter/s, 148.967s/40 iter), loss = -6.43825e-20
I0413 08:16:44.733660 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:16:44.733671 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:16:44.733681 22206 sgd_solver.cpp:106] Iteration 2040, lr = 0.001
I0413 08:19:13.831907 22206 solver.cpp:242] Iteration 2080 (0.268275 iter/s, 149.1s/40 iter), loss = -6.43825e-20
I0413 08:19:13.832031 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:19:13.832041 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:19:13.832053 22206 sgd_solver.cpp:106] Iteration 2080, lr = 0.001
I0413 08:21:42.788041 22206 solver.cpp:242] Iteration 2120 (0.268532 iter/s, 148.958s/40 iter), loss = -6.43825e-20
I0413 08:21:42.788151 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:21:42.788161 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:21:42.788172 22206 sgd_solver.cpp:106] Iteration 2120, lr = 0.001
I0413 08:24:11.707514 22206 solver.cpp:242] Iteration 2160 (0.268598 iter/s, 148.922s/40 iter), loss = -6.43825e-20
I0413 08:24:11.707672 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:24:11.707684 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:24:11.707695 22206 sgd_solver.cpp:106] Iteration 2160, lr = 0.001
I0413 08:26:40.647996 22206 solver.cpp:242] Iteration 2200 (0.26856 iter/s, 148.943s/40 iter), loss = -6.43825e-20
I0413 08:26:40.648113 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:26:40.648124 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:26:40.648134 22206 sgd_solver.cpp:106] Iteration 2200, lr = 0.001
I0413 08:29:09.702431 22206 solver.cpp:242] Iteration 2240 (0.268355 iter/s, 149.057s/40 iter), loss = -6.43825e-20
I0413 08:29:09.702499 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:29:09.702508 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:29:09.702519 22206 sgd_solver.cpp:106] Iteration 2240, lr = 0.001
I0413 08:29:58.160796 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_2254.caffemodel
I0413 08:29:58.253464 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2254.solverstate
I0413 08:31:38.717820 22206 solver.cpp:242] Iteration 2280 (0.268425 iter/s, 149.018s/40 iter), loss = -6.43825e-20
I0413 08:31:38.717890 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:31:38.717898 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:31:38.717908 22206 sgd_solver.cpp:106] Iteration 2280, lr = 0.001
I0413 08:34:07.770282 22206 solver.cpp:242] Iteration 2320 (0.268358 iter/s, 149.055s/40 iter), loss = -6.43825e-20
I0413 08:34:07.770356 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:34:07.770365 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:34:07.770376 22206 sgd_solver.cpp:106] Iteration 2320, lr = 0.001
I0413 08:36:36.791759 22206 solver.cpp:242] Iteration 2360 (0.268414 iter/s, 149.024s/40 iter), loss = -6.43825e-20
I0413 08:36:36.791877 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:36:36.791887 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:36:36.791899 22206 sgd_solver.cpp:106] Iteration 2360, lr = 0.001
I0413 08:39:05.874894 22206 solver.cpp:242] Iteration 2400 (0.268303 iter/s, 149.085s/40 iter), loss = -6.43825e-20
I0413 08:39:05.875010 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:39:05.875021 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:39:05.875032 22206 sgd_solver.cpp:106] Iteration 2400, lr = 0.001
I0413 08:41:34.985160 22206 solver.cpp:242] Iteration 2440 (0.268254 iter/s, 149.112s/40 iter), loss = -6.43825e-20
I0413 08:41:34.985235 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:41:34.985244 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:41:34.985255 22206 sgd_solver.cpp:106] Iteration 2440, lr = 0.001
I0413 08:44:03.968014 22206 solver.cpp:242] Iteration 2480 (0.268483 iter/s, 148.985s/40 iter), loss = -6.43825e-20
I0413 08:44:03.968082 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:44:03.968092 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:44:03.968103 22206 sgd_solver.cpp:106] Iteration 2480, lr = 0.001
I0413 08:46:33.126567 22206 solver.cpp:242] Iteration 2520 (0.268167 iter/s, 149.161s/40 iter), loss = -6.43825e-20
I0413 08:46:33.126674 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:46:33.126684 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:46:33.126695 22206 sgd_solver.cpp:106] Iteration 2520, lr = 0.001
I0413 08:49:02.207464 22206 solver.cpp:242] Iteration 2560 (0.268307 iter/s, 149.083s/40 iter), loss = -6.43825e-20
I0413 08:49:02.207599 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:49:02.207610 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:49:02.207622 22206 sgd_solver.cpp:106] Iteration 2560, lr = 0.001
I0413 08:49:58.178736 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_2576.caffemodel
I0413 08:49:58.270462 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2576.solverstate
I0413 08:51:31.441325 22206 solver.cpp:242] Iteration 2600 (0.268032 iter/s, 149.236s/40 iter), loss = -6.43825e-20
I0413 08:51:31.441433 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:51:31.441444 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:51:31.441455 22206 sgd_solver.cpp:106] Iteration 2600, lr = 0.001
I0413 08:54:00.436144 22206 solver.cpp:242] Iteration 2640 (0.268462 iter/s, 148.997s/40 iter), loss = -6.43825e-20
I0413 08:54:00.436259 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:54:00.436269 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:54:00.436280 22206 sgd_solver.cpp:106] Iteration 2640, lr = 0.001
I0413 08:56:29.580874 22206 solver.cpp:242] Iteration 2680 (0.268192 iter/s, 149.147s/40 iter), loss = -6.43825e-20
I0413 08:56:29.580984 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:56:29.580994 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:56:29.581006 22206 sgd_solver.cpp:106] Iteration 2680, lr = 0.001
I0413 08:58:58.719501 22206 solver.cpp:242] Iteration 2720 (0.268203 iter/s, 149.141s/40 iter), loss = -6.43825e-20
I0413 08:58:58.719606 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 08:58:58.719616 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 08:58:58.719629 22206 sgd_solver.cpp:106] Iteration 2720, lr = 0.001
I0413 09:01:27.701104 22206 solver.cpp:242] Iteration 2760 (0.268486 iter/s, 148.984s/40 iter), loss = -6.43825e-20
I0413 09:01:27.701212 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:01:27.701221 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:01:27.701232 22206 sgd_solver.cpp:106] Iteration 2760, lr = 0.001
I0413 09:03:56.649909 22206 solver.cpp:242] Iteration 2800 (0.268545 iter/s, 148.951s/40 iter), loss = -6.43825e-20
I0413 09:03:56.650017 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:03:56.650027 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:03:56.650038 22206 sgd_solver.cpp:106] Iteration 2800, lr = 0.001
I0413 09:06:25.543249 22206 solver.cpp:242] Iteration 2840 (0.268645 iter/s, 148.895s/40 iter), loss = -6.43825e-20
I0413 09:06:25.543360 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:06:25.543370 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:06:25.543381 22206 sgd_solver.cpp:106] Iteration 2840, lr = 0.001
I0413 09:08:54.533068 22206 solver.cpp:242] Iteration 2880 (0.268471 iter/s, 148.992s/40 iter), loss = -6.43825e-20
I0413 09:08:54.533181 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:08:54.533191 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:08:54.533202 22206 sgd_solver.cpp:106] Iteration 2880, lr = 0.001
I0413 09:09:57.946005 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_2898.caffemodel
I0413 09:09:58.037713 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2898.solverstate
I0413 09:11:23.805656 22206 solver.cpp:242] Iteration 2920 (0.267962 iter/s, 149.275s/40 iter), loss = -6.43825e-20
I0413 09:11:23.805763 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:11:23.805773 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:11:23.805783 22206 sgd_solver.cpp:106] Iteration 2920, lr = 0.001
I0413 09:13:52.782021 22206 solver.cpp:242] Iteration 2960 (0.268495 iter/s, 148.978s/40 iter), loss = -6.43825e-20
I0413 09:13:52.782095 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:13:52.782104 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:13:52.782114 22206 sgd_solver.cpp:106] Iteration 2960, lr = 0.001
I0413 09:16:21.811465 22206 solver.cpp:242] Iteration 3000 (0.268399 iter/s, 149.032s/40 iter), loss = -6.43825e-20
I0413 09:16:21.811580 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:16:21.811590 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:16:21.811601 22206 sgd_solver.cpp:106] Iteration 3000, lr = 0.001
I0413 09:18:50.943544 22206 solver.cpp:242] Iteration 3040 (0.268215 iter/s, 149.134s/40 iter), loss = -6.43825e-20
I0413 09:18:50.943651 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:18:50.943662 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:18:50.943673 22206 sgd_solver.cpp:106] Iteration 3040, lr = 0.001
I0413 09:21:20.150465 22206 solver.cpp:242] Iteration 3080 (0.26808 iter/s, 149.209s/40 iter), loss = -6.43825e-20
I0413 09:21:20.150588 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:21:20.150599 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:21:20.150609 22206 sgd_solver.cpp:106] Iteration 3080, lr = 0.001
I0413 09:23:49.286792 22206 solver.cpp:242] Iteration 3120 (0.268207 iter/s, 149.138s/40 iter), loss = -6.43825e-20
I0413 09:23:49.286911 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:23:49.286921 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:23:49.286932 22206 sgd_solver.cpp:106] Iteration 3120, lr = 0.001
I0413 09:26:18.334234 22206 solver.cpp:242] Iteration 3160 (0.268367 iter/s, 149.05s/40 iter), loss = -6.43825e-20
I0413 09:26:18.334349 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:26:18.334359 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:26:18.334370 22206 sgd_solver.cpp:106] Iteration 3160, lr = 0.001
I0413 09:28:47.519130 22206 solver.cpp:242] Iteration 3200 (0.26812 iter/s, 149.187s/40 iter), loss = -6.43825e-20
I0413 09:28:47.519201 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:28:47.519210 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:28:47.519222 22206 sgd_solver.cpp:106] Iteration 3200, lr = 0.0001
I0413 09:29:58.412418 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_3220.caffemodel
I0413 09:29:58.505291 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_3220.solverstate
I0413 09:29:58.578584 22206 solver.cpp:362] Iteration 3220, Testing net (#0)
I0413 09:29:58.578608 22206 net.cpp:723] Ignoring source layer train_data
I0413 09:29:58.578613 22206 net.cpp:723] Ignoring source layer train_label
I0413 09:29:58.578616 22206 net.cpp:723] Ignoring source layer train_transform
I0413 09:30:17.187198 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 09:30:17.187222 22206 solver.cpp:429] Test net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 09:30:17.187227 22206 solver.cpp:429] Test net output #2: mAP = 0
I0413 09:30:17.187232 22206 solver.cpp:429] Test net output #3: precision = 0
I0413 09:30:17.187237 22206 solver.cpp:429] Test net output #4: recall = 0
I0413 09:31:35.329571 22206 solver.cpp:242] Iteration 3240 (0.238361 iter/s, 167.813s/40 iter), loss = -6.43825e-20
I0413 16:03:01.155478 22206 sgd_solver.cpp:106] Iteration 9520, lr = 1e-05
I0413 16:05:31.172293 22206 solver.cpp:242] Iteration 9560 (0.266633 iter/s, 150.019s/40 iter), loss = -6.43825e-20
I0413 16:05:31.172405 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 16:05:31.172415 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 16:05:31.172428 22206 sgd_solver.cpp:106] Iteration 9560, lr = 1e-05
I0413 16:08:01.182947 22206 solver.cpp:242] Iteration 9600 (0.266644 iter/s, 150.013s/40 iter), loss = -6.43825e-20
I0413 16:08:01.183090 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 16:08:01.183101 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 16:08:01.183115 22206 sgd_solver.cpp:106] Iteration 9600, lr = 1e-06
I0413 16:10:31.031286 22206 solver.cpp:242] Iteration 9640 (0.266933 iter/s, 149.85s/40 iter), loss = -6.43825e-20
I0413 16:10:31.031422 22206 solver.cpp:261] Train net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 16:10:31.031432 22206 solver.cpp:261] Train net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 16:10:31.031445 22206 sgd_solver.cpp:106] Iteration 9640, lr = 1e-06
I0413 16:11:42.178346 22206 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_9660.caffemodel
I0413 16:11:42.270309 22206 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_9660.solverstate
I0413 16:11:42.342959 22206 solver.cpp:362] Iteration 9660, Testing net (#0)
I0413 16:11:42.342983 22206 net.cpp:723] Ignoring source layer train_data
I0413 16:11:42.342988 22206 net.cpp:723] Ignoring source layer train_label
I0413 16:11:42.342991 22206 net.cpp:723] Ignoring source layer train_transform
I0413 16:12:01.052938 22206 solver.cpp:429] Test net output #0: loss_bbox = 0 (* 2 = 0 loss)
I0413 16:12:01.052963 22206 solver.cpp:429] Test net output #1: loss_coverage = 0 (* 1 = 0 loss)
I0413 16:12:01.052969 22206 solver.cpp:429] Test net output #2: mAP = 0
I0413 16:12:01.052974 22206 solver.cpp:429] Test net output #3: precision = 0
I0413 16:12:01.052979 22206 solver.cpp:429] Test net output #4: recall = 0
I0413 16:12:01.052984 22206 solver.cpp:347] Optimization Done.
I0413 16:12:01.052987 22206 caffe.cpp:234] Optimization Done.
What should I change in order to improve my accuracy, which stays at zero, when training in DIGITS using Caffe?
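A minimal diagnostic sketch, not taken from this issue: when `loss_bbox` and `loss_coverage` are exactly 0 from the very first iteration, the data layer is often receiving no positive ground-truth boxes at all (e.g. empty or mis-formatted KITTI label files, or boxes outside the valid size range). The field layout and directory convention below are assumptions based on the usual DIGITS object-detection (KITTI) label format; adjust them to your dataset.

```python
import os

def parse_kitti_line(line):
    """Parse one KITTI-format label line into (class_name, bbox).

    KITTI label lines have at least 15 whitespace-separated fields;
    fields 4-7 are the 2D bbox: left, top, right, bottom (pixels).
    """
    fields = line.split()
    if len(fields) < 8:
        raise ValueError("malformed KITTI label line: %r" % line)
    cls = fields[0]
    left, top, right, bottom = map(float, fields[4:8])
    return cls, (left, top, right, bottom)

def check_label_dir(label_dir):
    """Count empty label files and degenerate (zero-area) boxes.

    Returns (num_empty_files, num_bad_boxes, num_boxes). If most files
    are empty or most boxes are degenerate, the network never sees a
    positive target and the losses stay at zero.
    """
    empty, bad_boxes, total = 0, 0, 0
    for name in sorted(os.listdir(label_dir)):
        with open(os.path.join(label_dir, name)) as f:
            lines = [l for l in f if l.strip()]
        if not lines:
            empty += 1
            continue
        for line in lines:
            _, (l, t, r, b) = parse_kitti_line(line)
            total += 1
            if r <= l or b <= t:
                bad_boxes += 1
    return empty, bad_boxes, total
```

Running `check_label_dir` on the training-label folder before blaming the solver settings is cheap; it separates a data problem (the usual cause of this exact log pattern) from a network or learning-rate problem.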
