
Finetune using whole images instead of bounding boxes #631

Closed
caffecuda opened this issue Jul 6, 2014 · 13 comments


@caffecuda

Hello,

With an old version of Caffe (a master release downloaded in April) it was possible to finetune by specifying a leveldb as the source instead of window_file_.txt. With the new version, however, I'm getting this error:

libprotobuf ERROR google/protobuf/text_format.cc:172] Error parsing text-format caffe.NetParameter: 13:17: Message type "caffe.DataParameter" has no field named "fg_threshold".
F0706 22:01:11.571449 4524 upgrade_proto.cpp:571] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: pascal_finetune_train.prototxt

I have changed "WINDOW_DATA" to "DATA" and "window_data_param" to "data_param" etc. Thanks!

@caffecuda
Author

Hi,
Basically, I'd like to know how to finetune using new images and labels, not bounding boxes.
Thanks.

@caffecuda caffecuda changed the title Finetune on a new dataset with *leveldb as sources Finetune using whole images instead of bounding boxes Jul 9, 2014
@shelhamer
Member

Make your new leveldb of input images and labels. Change the leveldb source
in your net prototxt. Write a solver prototxt. Call finetune_net.bin
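
A minimal sketch of such a data layer in the mid-2014 prototxt format; the layer name and leveldb path here are placeholders:

```
layers {
  name: "data"
  type: DATA
  data_param {
    source: "my_train_leveldb"   # point this at your new leveldb
    batch_size: 256
  }
  top: "data"
  top: "label"
}
```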


@caffecuda
Author

@shelhamer Thanks for the response. That's basically what I did, except that I didn't start from scratch; I adapted the prototxt files in "pascal-finetuning/". The question is whether to keep these fields:

fg_threshold: 0.5
bg_threshold: 0.5
fg_fraction: 0.25
context_pad: 16
crop_mode: "warp"

If they are kept I get:

libprotobuf ERROR google/protobuf/text_format.cc:172] Error parsing text-format caffe.NetParameter: 13:17: Message type "caffe.DataParameter" has no field named "fg_threshold".
F0706 22:01:11.571449 4524 upgrade_proto.cpp:571] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: pascal_finetune_train.prototxt

If they are removed, I get:

F0710 14:42:51.490083 9965 data_layer.cpp:254] Check failed: prefetch_rng_

PS: with an old version of Caffe (downloaded in April), keeping these fields worked.

@shelhamer
Member

Look at examples/imagenet instead. If you're not doing windowed finetuning,
don't use the WindowDataLayer. A regular DataLayer will do fine.
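
For whole images, a plain DataLayer without any of the window-specific fields might look like this (the leveldb and mean-file names are assumptions based on this thread):

```
layers {
  name: "data"
  type: DATA
  data_param {
    source: "imagenet10_train_leveldb"      # whole-image leveldb, not windows
    mean_file: "imagenet_mean.binaryproto"
    batch_size: 256
    crop_size: 227
    mirror: true
    # no fg_threshold / bg_threshold / fg_fraction / context_pad / crop_mode
  }
  top: "data"
  top: "label"
}
```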


@caffecuda
Author

@shelhamer The only difference between my prototxts and those in examples/imagenet is that they use different leveldbs (plus, of course, different fc8 layers):

http://pastebin.com/6YgVUM00

where imagenet10 is my toy problem with 10 imagenet classes. While training works on examples/imagenet, on my finetuning toy problem it produces this error:

I0710 15:56:32.887979 18611 finetune_net.cpp:27] Loading from caffe_reference_imagenet_model
I0710 15:56:36.863538 18611 solver.cpp:61] Solving CaffeNet
I0710 15:56:36.863694 18611 solver.cpp:106] Iteration 0, Testing net
F0710 15:56:37.043587 18645 data_layer.cpp:254] Check failed: prefetch_rng_
*** Check failure stack trace: ***
@ 0x2b9993c6bb7d google::LogMessage::Fail()
@ 0x2b9993c6dc7f google::LogMessage::SendToLog()
@ 0x2b9993c6b76c google::LogMessage::Flush()
@ 0x2b9993c6e51d google::LogMessageFatal::~LogMessageFatal()
@ 0x4880f5 caffe::DataLayer<>::PrefetchRand()
@ 0x486e07 caffe::DataLayerPrefetch<>()
@ 0x2b9993a47e9a start_thread
@ 0x2b99961ec3fd (unknown)
Aborted (core dumped)

@caffecuda
Author

@Mezn Hi, would you mind sharing your prototxt files for finetuning using whole images instead of windows? Many thanks.

@wendlerc

Finetune prototxts for a binary classification task; the only layer that has to be modified is the last fully connected one (outputs changed from 1000 to 2, and the name changed).

Training:
http://pastebin.com/WdXTsJzP
Validation:
http://pastebin.com/iPs6xuAM
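
The fc8 change described above, sketched in the old prototxt format (the layer and blob names are placeholders):

```
layers {
  name: "fc8_binary"      # renamed so the pretrained 1000-way fc8 weights are not copied over
  type: INNER_PRODUCT
  blobs_lr: 1
  blobs_lr: 2
  inner_product_param {
    num_output: 2         # was 1000 in the original ImageNet model
  }
  bottom: "fc7"
  top: "fc8_binary"
}
```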

To finetune you have to call finetune_net.bin solver.prototxt.
Feel free to ask further questions if this was not sufficient.

Best regards,

Chris

@caffecuda
Author

@Mezn @shelhamer The problem was caused by the fact that I have "mirror: true" in the val.prototxt file. Comparing with Chris' version helped identify the problem, thanks!

The pascal finetune example has
blobs_lr: 10
blobs_lr: 20
for fc8_pascal while the imagenet example has
blobs_lr: 1
blobs_lr: 2
for fc8. Which setting should I be using when finetuning? Is there a rule of thumb for these parameters?
Thanks!
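
For reference, these fields sit in the last layer's definition: the first blobs_lr multiplies the solver learning rate for the layer's weights and the second for its biases, so the pascal example makes the freshly initialized fc8_pascal learn much faster than the pretrained layers. A sketch (num_output: 21 is an assumption for 20 PASCAL classes plus background):

```
layers {
  name: "fc8_pascal"
  type: INNER_PRODUCT
  blobs_lr: 10    # learning-rate multiplier for the weights
  blobs_lr: 20    # learning-rate multiplier for the biases
  inner_product_param {
    num_output: 21
  }
  bottom: "fc7"
  top: "fc8_pascal"
}
```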

@wendlerc

I don't exactly know what blobs_lr does; for my experiments both worked (in terms of precision I could not see a difference, but this could be because I have only considered very small examples so far). However, a more sophisticated explanation would be very helpful! I am glad that I was able to help :)

Best regards,
Chris

@htzheng

htzheng commented Jul 13, 2014

@Mezn Hello, if I call finetune_net.bin solver.prototxt in the terminal, it gives the error
finetune_net.cpp:18] Usage: finetune_net solver_proto_file pretrained_net
but if I call finetune_net.bin solver.prototxt caffe_reference_imagenet_model, the terminal stops working without any output.

I'm not sure about the usage of finetune_net.bin, is there some information that might help? Thank you!

@wendlerc

You are using it right, but I don't know why it crashes without any output in your case; that never happened to me. Maybe there is something wrong with the Google logging library (a wild guess). Did the MNIST example network training work for you? (http://caffe.berkeleyvision.org/gathered/examples/mnist.html)

Good luck & best regards,
Chris

@sguada
Contributor

sguada commented Jul 13, 2014

You need to set GLOG_logtostderr=1 to see the log output.

Sergio
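
For example, prefixing the invocation like this (the binary and file names are the ones used earlier in this thread):

```shell
# glog writes messages to files under /tmp by default; this sends them
# to the terminal for this one invocation instead.
GLOG_logtostderr=1 ./finetune_net.bin solver.prototxt caffe_reference_imagenet_model
```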


@htzheng

htzheng commented Jul 15, 2014

Thank you! It was a downstream problem; GLOG_logtostderr=1 allows the code to output its log, and it now works well.

