Hi!

When trying to train DeepCpG on a GPU system, I get very low GPU utilization (approximately 25%). Do you perhaps have an idea what could be the cause? I tried increasing the number of workers for the data loader, but it didn't help.

Here are the parameters I have been using:

dcpg_train.py ${mydatadir}/train/* --val_file ${mydatadir}/val/* --out_dir ${mydatadir}/model/ --dna_model CnnL2h128 --cpg_model RnnL1 --joint_model JointL2h512 --nb_epoch 10 --data_nb_worker 8 --data_q_size 20 --batch_size=512

I initially forgot to set data_nb_worker, so it defaulted to 1. I increased it to 8 and found no improvement. In both cases, GPU utilization stays constant at only about 25%.
Thanks,
Rene
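
A flat utilization of roughly 25% often means the GPU spends much of its time waiting for batches, so a useful first check is to time the data pipeline separately from a full training step. The sketch below is not part of DeepCpG: dummy_generator and the model object are hypothetical stand-ins for the real training generator and the compiled Keras model that dcpg_train.py builds. If the data-only time is close to the full-step time, the input pipeline is the bottleneck, and raising --data_nb_worker / --data_q_size alone may not help; faster local storage for the data files may matter more.

# A minimal diagnostic sketch (not DeepCpG code) to see whether batches can be
# produced fast enough to keep the GPU busy. dummy_generator() is a placeholder;
# in practice, substitute the real DeepCpG training generator and pass in the
# compiled Keras model.
import time
import numpy as np

def dummy_generator(batch_size=512, seq_len=1001):
    # Placeholder input pipeline yielding (inputs, targets) batches.
    while True:
        x = np.random.randint(0, 4, size=(batch_size, seq_len))
        y = np.random.randint(0, 2, size=(batch_size, 1))
        yield x, y

def time_batches(gen, n_batches=50):
    # Average seconds to draw one batch from the generator (data pipeline only).
    start = time.time()
    for _ in range(n_batches):
        next(gen)
    return (time.time() - start) / n_batches

def time_train_steps(model, gen, n_batches=50):
    # Average seconds for one full step: drawing a batch plus forward/backward.
    start = time.time()
    for _ in range(n_batches):
        x, y = next(gen)
        model.train_on_batch(x, y)  # standard Keras API
    return (time.time() - start) / n_batches

# Example usage (model assumed to be built and compiled elsewhere):
#   gen = dummy_generator()  # swap in the real generator here
#   print('data only: %.3f s/batch' % time_batches(gen))
#   print('full step: %.3f s/batch' % time_train_steps(model, gen))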