can't utilize GPU to accelerate computation. #55
Comments
I have the same problem!
Here is how we currently compute the deconvolution: for each layer whose deconvolution output we want, we run one forward pass. Then we run 8 backward passes in parallel, one for each of 8 channels in that layer, then the next 8 feature maps, and so on. The value 8 was chosen to stay within memory limits. Also, there is no learning happening here, so we cannot exploit the GPU for optimizer-style repeated operations on the same data, as we could when training weights. Do suggest other possible solutions if you see any.
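Roughly, that scheme amounts to something like the TF 1.x sketch below. The tensor names are placeholders, and a plain gradient backward pass stands in for the library's deconvnet-style backward pass; it is only meant to illustrate the 8-channels-at-a-time structure described above.

```python
import tensorflow as tf

CHANNELS_PER_BATCH = 8  # chosen to fit in GPU memory, as noted above

def channelwise_deconv(sess, input_ph, layer, feed_dict):
    """Reconstruct the input from each channel of `layer`, 8 channels at a time.

    `input_ph` and `layer` are the input placeholder and the activation tensor
    of the layer being visualized (hypothetical names). A gradient backward
    pass is used here as a stand-in for the deconvnet reconstruction.
    """
    n_channels = layer.get_shape().as_list()[-1]
    reconstructions = []
    for start in range(0, n_channels, CHANNELS_PER_BATCH):
        stop = min(start + CHANNELS_PER_BATCH, n_channels)
        # One backward pass per channel; the 8 gradient ops are fetched in a
        # single sess.run, so they can execute concurrently on the GPU.
        grads = [tf.gradients(layer[..., c], input_ph)[0]
                 for c in range(start, stop)]
        reconstructions.extend(sess.run(grads, feed_dict=feed_dict))
    return reconstructions
```

Because each sess.run covers only 8 small backward passes on a single image, the GPU spends most of its time idle between launches, which may be why utilization appears to stay near 0% even though memory is allocated.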
OK, thanks for the reply.
Hi,
I tried to visualize the layers of an InceptionV4 model by feeding it an MRI image. Everything works well except that the GPU does not seem to be involved in the computation: TensorFlow does allocate graphics memory for the process, but GPU utilization stays at 0%, and most of the time only one CPU core is busy. How can I use the GPU to accelerate this? My code is below.
```python
import tensorflow as tf
from tf_cnnvis import *
from nets.inception_v4 import inception_v4_base, inception_v4_arg_scope
import matplotlib.image as mpimg
import numpy as np

slim = tf.contrib.slim

if __name__ == '__main__':
    # Input placeholder for a batch of 160x160 RGB images
    X = tf.placeholder(tf.float32, [None, 160, 160, 3])

    # Load the MRI slice, crop it to 160x160, replicate it across
    # three channels, and add a batch dimension
    img = mpimg.imread('data/image.png')
    img = img[42:202, 3:163]
    img = np.stack([img, img, img], axis=2)
    img = np.reshape(img, [1, 160, 160, 3])
```
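(The snippet above cuts off before the graph construction and the visualization call. For completeness, a rough continuation in the style of the tf_cnnvis README is sketched below; it would sit inside the `if __name__ == '__main__':` block, and the layer selection and output paths are placeholders, not part of the original code.)

```python
    # Continuation sketch, not part of the original snippet.
    # Build the InceptionV4 feature extractor on top of the placeholder.
    with slim.arg_scope(inception_v4_arg_scope()):
        net, end_points = inception_v4_base(X)

    with tf.Session() as sess:
        # In practice the pretrained InceptionV4 checkpoint would be
        # restored here instead of random initialization.
        sess.run(tf.global_variables_initializer())

        # Deconvolution visualization of relu/pool/conv layers; results are
        # written to the given log/output directories.
        deconv_visualization(sess_graph_path=sess,
                             value_feed_dict={X: img},
                             layers=['r', 'p', 'c'],
                             path_logdir='./Log',
                             path_outdir='./Output')
```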
Thanks for the excellent work : )