
MXNet: Using FusedRNNCell with its "bidirectional" flag set to True can cause the training run to hang. #9171

Closed
kalpitdixit opened this issue Dec 21, 2017 · 12 comments

Comments

@kalpitdixit

Description

MXNet
Using FusedRNNCell with its "bidirectional" flag set to True can cause the training run to hang (i.e. pause indefinitely with no progress, error, or crash).

Details

I am running a single training run of a Sequence-to-Sequence model using the BucketingModule. I am using an Encoder-Decoder network: a FusedRNNCell with its "bidirectional" flag turned on for the Encoder and an unfused RNNCell for the Decoder.
GPU memory usage is 15000MB / 16000MB. CPU utilization is 95%.
For each batch during training, I do a forward() pass and a backward() pass. After 5-15 epochs, the training run gets stuck in the forward() pass of one of the mini-batches. The forward pass never completes: no errors are thrown, nothing crashes, and GPU/CPU utilization stays exactly the same.

I have ablated many aspects of my training run (architecture, data, code, etc.). The conclusion is that specifically using FusedRNNCell with the "bidirectional" flag set to True causes this problem.
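
For reference, a minimal sketch of the kind of encoder configuration described above (the symbol names and sizes below are illustrative, not the actual training code):

import mxnet as mx

# Illustrative sizes only; the real model and data are not shown here.
num_hidden, num_layers, seq_len, vocab_size = 256, 2, 35, 10000

data = mx.sym.Variable('data')  # (batch, seq_len) token ids
embed = mx.sym.Embedding(data, input_dim=vocab_size,
                         output_dim=num_hidden, name='embed')

# Encoder: a fused, bidirectional LSTM -- the configuration that hangs.
enc_cell = mx.rnn.FusedRNNCell(num_hidden, num_layers=num_layers,
                               mode='lstm', bidirectional=True,
                               prefix='enc_')
enc_outputs, enc_states = enc_cell.unroll(seq_len, inputs=embed,
                                          merge_outputs=True, layout='NTC')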

Package used

Python

Environment info

----------Python Info----------
Version : 3.5.2
Compiler : GCC 5.4.0 20160609
Build : ('default', 'Nov 23 2017 16:37:01')
Arch : ('64bit', 'ELF')
------------Pip Info-----------
Version : 9.0.1
Directory : /usr/local/lib/python3.5/dist-packages/pip
----------MXNet Info-----------
Version : 1.0.0
Directory : /usr/local/lib/python3.5/dist-packages/mxnet
Commit Hash : 25720d0
----------System Info----------
Platform : Linux-4.4.0-1039-aws-x86_64-with-Ubuntu-16.04-xenial
system : Linux
node : ip-172-31-85-194
release : 4.4.0-1039-aws
version : #48-Ubuntu SMP Wed Oct 11 15:15:01 UTC 2017
----------Hardware Info----------
machine : x86_64
processor : x86_64
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
Stepping: 1
CPU MHz: 1200.582
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.09
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq monitor est ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt ida
----------Network Test----------
Setting timeout: 10
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0300 sec, LOAD: 0.0514 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.1141 sec, LOAD: 0.1956 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0016 sec, LOAD: 0.4062 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.1799 sec, LOAD: 0.3847 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0046 sec, LOAD: 0.0126 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0154 sec, LOAD: 0.1567 sec.

@kalpitdixit
Author

I am using:
MXNet==1.0.0
CUDA==9.0
cuDNN==7.0

As I understand it, FusedRNNCell is faster than the unfused RNNCell because it calls a cuDNN CUDA kernel directly. It appears that the "bidirectional" flag in FusedRNNCell is passed straight through to that kernel call. This is just FYI; it might point to a CUDA kernel issue, but I am not a CUDA expert.
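
For context, a rough sketch (my approximation, not the exact MXNet internals) of the kind of graph FusedRNNCell appears to build: a single symbolic mx.sym.RNN node backed by cuDNN, with the bidirectional flag forwarded as an operator attribute:

import mxnet as mx

# Sketch only; variable names here are illustrative placeholders,
# not the cell's real parameter names.
num_hidden, num_layers = 256, 2

data = mx.sym.Variable('data')          # (seq_len, batch, input_dim), TNC layout
params = mx.sym.Variable('rnn_params')  # flattened cuDNN weight blob
init_h = mx.sym.Variable('state')       # initial hidden state
init_c = mx.sym.Variable('state_cell')  # initial cell state (LSTM only)

rnn = mx.sym.RNN(data=data, parameters=params, state=init_h,
                 state_cell=init_c, state_size=num_hidden,
                 num_layers=num_layers, bidirectional=True,
                 mode='lstm', name='enc_rnn')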

@eric-haibin-lin

@kalpitdixit
Author

kalpitdixit commented Dec 21, 2017

What I want (but it leads to hanging):

cell = FusedRNNCell(.... bidirectional=True....)

Best workaround (but still slow):

l_cell = FusedRNNCell(.... bidirectional=False....)
r_cell = FusedRNNCell(.... bidirectional=False....)
cell = BidirectionalCell(l_cell, r_cell....)

All other workarounds are 3x-10x slower than the ideal setup I "want" to use above. This workaround is "only" 2x slower; a fuller sketch of it follows below.
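
A runnable-style sketch of this workaround, with illustrative sizes and an assumed pre-embedded input symbol standing in for the elided arguments above:

import mxnet as mx

# Illustrative sizes only.
num_hidden, num_layers, seq_len = 256, 1, 35

# Workaround: two unidirectional FusedRNNCells wrapped in a BidirectionalCell.
l_cell = mx.rnn.FusedRNNCell(num_hidden, num_layers=num_layers, mode='lstm',
                             bidirectional=False, prefix='enc_l_')
r_cell = mx.rnn.FusedRNNCell(num_hidden, num_layers=num_layers, mode='lstm',
                             bidirectional=False, prefix='enc_r_')
cell = mx.rnn.BidirectionalCell(l_cell, r_cell, output_prefix='enc_bi_')

# Assumed pre-embedded input of shape (batch, seq_len, num_hidden).
inputs = mx.sym.Variable('encoded_input')
outputs, states = cell.unroll(seq_len, inputs=inputs,
                              merge_outputs=True, layout='NTC')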

@szha
Member

szha commented Dec 21, 2017

What's the patch version of cuDNN? Could you confirm whether the hanging still happens with the latest CUDA 9.0.176/9.1.x and cuDNN 7.0.5? Also, which GPU are you using?

@szha
Member

szha commented Dec 21, 2017

Could you provide a runnable code snippet that reproduces the hanging problem? You can use random input if the data is not related to the hang.

@kalpitdixit
Author

@szha
I tested CUDA 9.0.176 with cuDNN 7.0.3 and with cuDNN 7.0.5.
In both cases, the hanging problem happens.

@SuperLinguini

Proposed labels: Bug, Python, RNN

@szha
Member

szha commented Mar 20, 2018

@kalpitdixit does the problem still happen?

@eric-haibin-lin
Member

Hi @DickJC123, this is the issue I mentioned with fused RNN and bidirectional=True.
@sxjscience, is this also related to the error you're seeing?

@sxjscience
Member

Looks similar. I'm trying to put together an MWE.

@sxjscience
Member

The error message:

Traceback (most recent call last):
  File "sentiment_analysis.py", line 270, in <module>
    train(args)
  File "sentiment_analysis.py", line 261, in train
    test_avg_L, test_acc = evaluate(net, test_dataloader, context)
  File "sentiment_analysis.py", line 136, in evaluate
    total_L += L.sum().asscalar()
  File "/home/ubuntu/mxnet/python/mxnet/ndarray/ndarray.py", line 1844, in asscalar
    return self.asnumpy()[0]
  File "/home/ubuntu/mxnet/python/mxnet/ndarray/ndarray.py", line 1826, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/ubuntu/mxnet/python/mxnet/base.py", line 149, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [22:08:57] src/operator/./cudnn_rnn-inl.h:457: Check failed: e == CUDNN_STATUS_SUCCESS (8 vs. 0) cuDNN: CUDNN_STATUS_EXECUTION_FAILED

Stack trace returned 10 entries:
[bt] (0) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::StackTrace[abi:cxx11]()+0x5b) [0x7f4cee092c5b]
[bt] (1) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28) [0x7f4cee093798]
[bt] (2) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::op::CuDNNRNNOp<float>::Init(mshadow::Stream<mshadow::gpu>*, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0x2142) [0x7f4cf27b4f22]
[bt] (3) /home/ubuntu/mxnet/python/mxnet/../../lib/libmxnet.so(mxnet::op::CuDNNRNNOp<float>::Forward(mxnet::OpContext const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::OpReqType, std::allocator<mxnet::OpReqType> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&, std::vector<mxnet::TBlob, std::allocator<mxnet::TBlob> > const&)+0xa5d) [0x7f4cf27c2f1d]

@vandanavk
Contributor

@kalpitdixit Could you provide a script with which this issue occurs?

@kalpitdixit
Author

kalpitdixit commented Aug 14, 2018

@vandanavk
I re-ran my code on the latest version of MXNet. This issue no longer happens.
