
Running Deep Occlusion Fully on CPU #25

Open
emmanuec opened this issue Mar 14, 2018 · 5 comments

Comments

@emmanuec

Hello,

Thank you for your work!

I have a quick question. I think the answer is yes, but I wanted to confirm in case I set something up incorrectly:

Is it normal for the Deep Occlusion model to run very slowly when it is used on CPU only?

Thank you!

@pierrebaque
Owner

Hello,

Yes, it is normal. The inference part runs only on the CPU, but the other parts should be run on the GPU.

Cheers
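
For reference, a hedged sketch of how the CPU/GPU split is usually controlled in code of this era: Theano-based projects select the device via `THEANO_FLAGS` or a `~/.theanorc` file. That this repository actually uses Theano is an assumption on my part, not confirmed in this thread, so check the repository's README before relying on it:

```ini
# Hypothetical ~/.theanorc; assumes a Theano backend, which this
# thread does not confirm. Consult the repository's README.
[global]
# use "cuda" on newer Theano, "gpu" on older versions, "cpu" to force CPU
device = cuda
floatX = float32
```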

@nagasanthoshp

nagasanthoshp commented May 4, 2018

@emmanuec Does the machine you are working on have an Nvidia GPU? Mine does not, and I am keen to run the model. Could you help me in any case?

Thank you!

@emmanuec
Author

@nagasanthoshp My CPU is an Intel i7 and my GPU is Nvidia. You generally need an Nvidia GPU to run the model in GPU mode; CPU mode should work with any CPU.
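
A minimal sketch of how one might check for a usable Nvidia GPU from Python before deciding between CPU and GPU mode. It relies only on `nvidia-smi`, which ships with the Nvidia driver; its absence is a reasonable (if not airtight) signal that no Nvidia GPU is available:

```python
import shutil
import subprocess

def has_nvidia_gpu() -> bool:
    """Return True if nvidia-smi is installed and reports at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        # Driver tools absent: almost certainly no usable Nvidia GPU.
        return False
    try:
        out = subprocess.run(
            ["nvidia-smi", "-L"],  # lists one line per GPU, e.g. "GPU 0: ..."
            capture_output=True, text=True, timeout=10,
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return out.returncode == 0 and "GPU" in out.stdout

if __name__ == "__main__":
    mode = "GPU" if has_nvidia_gpu() else "CPU"
    print(f"Running in {mode} mode")
```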

@Sunny-Yu

@pierrebaque Hello, I am also puzzled by this. I ran RunParts.ipynb successfully, but it processes one image every 5 minutes. I have checked that both the CPU and GPU are used, but nvidia-smi reports GPU-Util between 0% and 1%. Could you confirm whether this is normal? My CPU is an Intel i7 and my GPU is a Titan X. Thank you.
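
To double-check how busy the GPU actually is during a run, one can poll `nvidia-smi`'s CSV query output (the `--query-gpu=utilization.gpu --format=csv` flags are standard `nvidia-smi` options). The parsing helper below is exercised with a canned sample so the logic is clear even without a GPU present:

```python
import subprocess

def parse_utilization(csv_text: str) -> list:
    """Parse 'utilization.gpu' CSV output from nvidia-smi into percentages."""
    values = []
    for line in csv_text.strip().splitlines():
        line = line.strip()
        if not line or line.startswith("utilization"):
            continue  # skip the CSV header row
        values.append(int(line.split()[0]))  # e.g. "1 %" -> 1
    return values

def query_utilization() -> list:
    """Poll the live per-GPU utilization; requires nvidia-smi on PATH."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv"],
        capture_output=True, text=True, check=True,
    )
    return parse_utilization(out.stdout)

# Sample of what nvidia-smi prints for one nearly idle GPU:
sample = "utilization.gpu [%]\n1 %\n"
print(parse_utilization(sample))  # -> [1]
```

Utilization pinned near 0–1% while a notebook is "using" the GPU usually means the heavy computation is actually running on the CPU, which matches the symptom described above.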

@soullessrobot

@emmanuec Can RunUnaries.ipynb and RunPom.ipynb run successfully on CPU?
