Finetuning out-of-memory and lack of output #682
Update: F0713 21:28:27.059324 3532 syncedmem.cpp:47] Check failed: error == cudaSuccess (2 vs. 0) out of memory. Is there any clue?
The message is clear: you are out of memory on your GPU card, so you will need to reduce the batch size.
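Reducing the batch size means editing the data layer of the network definition. A minimal sketch of such a layer (field names follow Caffe's prototxt schema; the source path and values here are illustrative, not from this thread):

```protobuf
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"  # illustrative path
    batch_size: 32   # lower this until training fits in GPU memory
    backend: LMDB
  }
}
```

The prototxt is parsed at runtime, so no recompilation is needed after changing it.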
@sguada Thank you! I switched to a smaller batch_size and the problem is solved. There is a tiny issue with finetune_net.bin on Ubuntu: LOG(INFO) does not print to the terminal. I changed LOG(INFO) to std::cout, and as far as I can observe the finetuning code works well.
Happy that you figured it out. For logging, tell glog to write to the console instead of its default log files.
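Caffe logs through Google's glog library, which by default writes INFO messages to log files rather than the terminal. A minimal sketch (the finetune_net.bin invocation is illustrative and commented out):

```shell
# glog sends INFO-level output to files under /tmp by default; this
# environment variable routes it to stderr so it shows on the terminal.
export GLOG_logtostderr=1
# ./finetune_net.bin solver.prototxt   # illustrative invocation
```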
I am trying to implement the DeepFace model. The memory required for test is 49724060 (with batch size finally 1) and for train 49724060 (again with batch size 1), which makes the total memory required around 94 MB. But I still get the 'out of memory' problem. I have an NVIDIA GeForce GT 650M as my GPU. Viewing the GPU status with `nvidia-smi -q`, I can see that the total memory (FB) is 2047 MiB and free memory is 1646 MiB. Can anyone point out what I am missing?
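As a rough sanity check on such estimates: blob memory scales linearly with batch size, and training roughly doubles the figure because Caffe keeps a gradient (diff) buffer alongside each data blob; cuDNN workspaces, allocator overhead, and other processes on the card also eat into the free memory reported by nvidia-smi. A minimal sketch (the blob shape below is illustrative, not from DeepFace):

```python
def blob_bytes(n, c, h, w, bytes_per_value=4):
    """Single-precision memory for one blob of shape (n, c, h, w)."""
    return n * c * h * w * bytes_per_value

# Illustrative conv feature map: batch 1, 64 channels, 112x112 spatial size
data = blob_bytes(1, 64, 112, 112)  # 3211264 bytes, about 3 MB
train = 2 * data                    # data + diff buffers during backprop
```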
I got the same GPU error when I trained the ImageNet example. Limiting the batch size from 256 to 4 in train_val.prototxt fixed my GPU memory shortage. I also considered changing solver_mode from GPU to CPU, because my GPU has only 512 MB of memory (it is an old MacBook Pro); CPU mode seemed to train without the problem. I think I should use GPU mode for massively parallel computing, but I only have a few PCs and laptops and I am just learning Caffe for my studies. Should I always use GPU mode? GPU mode can accelerate computation, but perhaps that matters less while I am still learning Caffe. Still, I will have to build a new PC for training with Caffe. Please advise me on what type of GPU is good enough for Caffe. I am thinking of buying an NVIDIA GeForce GTX 960 with 2 GB of memory; I heard a GPU with 3 GB or more is sufficient for Caffe.
When I try to install Fast R-CNN I get an error like this. How do I solve it? Loaded network /home/rvlab/Music/fast-rcnn/data/fast_rcnn_models/vgg16_fast_rcnn_iter_40000.caffemodel
I get this error when running the following:
@sguada How should I reduce the batch size? In which file? Can you show an example?
@monajalal Have you figured out which file it was? :D |
@Dulex123 @monajalal If you're using py-faster-rcnn, you can change the batch size in its configuration. Also, I would suggest you take a look at this issue. Hope this helps!
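For context, py-faster-rcnn keeps its defaults in lib/fast_rcnn/config.py and lets a YAML file passed with the --cfg flag override them. A hypothetical override file (the key names and values below are assumptions based on that config system, not quoted from this thread):

```yaml
# Hypothetical override file for py-faster-rcnn's --cfg flag
TRAIN:
  IMS_PER_BATCH: 1    # images per minibatch
  BATCH_SIZE: 64      # RoIs sampled per minibatch
```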
Using a smaller batch_size will work.
I changed my batch_size from 128 to 32, but it still fails. Do we need to build again after changing the config file?
Hi, I plan to apply the pretrained ImageNet model to a 2-class classification task, so I need to modify the fc8 layer and then finetune the network. I followed shelhamer's suggestion in #186.
Here is what I do:
After that, the terminal does not respond for a long time and produces no output.
I am new to Caffe; could someone tell me how to finetune an existing model? Thanks for any reply!
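For reference, finetuning in later Caffe versions goes through the unified caffe binary rather than the older finetune_net.bin tool. A sketch, with illustrative paths (the actual train command is commented out since it needs a built Caffe and real model files):

```shell
# Illustrative paths; adjust to your checkout and model directory.
CAFFE=./build/tools/caffe
SOLVER=models/finetune_2class/solver.prototxt
WEIGHTS=models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
# --weights initializes matching layers from the pretrained model;
# a renamed fc8 (with num_output: 2) gets fresh weights instead.
# "$CAFFE" train --solver="$SOLVER" --weights="$WEIGHTS"
```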