This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
[v1.x] provide a faster PrefetchedDataLoader (#19748)
* provide a faster PrefetchedDataLoader

  Because the implementation is kept deliberately simple, this `PrefetchedDataLoader` only allows a single iterator to be generated at a time. Its benefit is that it delivers better performance as a drop-in replacement for `DataLoader` in most existing code.

  test:

  ```python
  $ cat iternew.py && python iternew.py
  import mxnet as mx
  from mxnet.gluon.data import PrefetchedDataLoader as DataLoader, ArrayDataset
  from time import sleep, perf_counter_ns

  train_data = ArrayDataset(mx.nd.array([[i] for i in range(50000)]),
                            mx.nd.array([[99 - i] for i in range(50000)]))
  test_data = ArrayDataset(mx.nd.array([[i] for i in range(10000)]),
                           mx.nd.array([[99 - i] for i in range(10000)]))

  def transform_train(sample):
      sleep(0.0016)
      return sample

  def transform_test(sample):
      sleep(0.0008)
      return sample

  train_iter = DataLoader(train_data.transform_first(transform_train),
                          batch_size=500, num_workers=10)
  test_iter = DataLoader(test_data.transform_first(transform_test),
                         batch_size=500, num_workers=10)

  tic = perf_counter_ns()
  for epoch in range(10):
      print("epoch" + str(epoch) + " start at " + str(round((perf_counter_ns() - tic) * 1e-9, 2)) + "s")
      for i in train_iter:
          sleep(0.1)
      print("    finished train phase at " + str(round((perf_counter_ns() - tic) * 1e-9, 2)) + "s")
      for i in test_iter:
          sleep(0.05)
      print("    finished test phase at " + str(round((perf_counter_ns() - tic) * 1e-9, 2)) + "s")
  print("cost=" + str((perf_counter_ns() - tic) * 1e-9) + "s")

  epoch0 start at 0.0s
      finished train phase at 11.25s
      finished test phase at 12.31s
  epoch1 start at 12.31s
      finished train phase at 22.62s
      finished test phase at 23.68s
  epoch2 start at 23.68s
      finished train phase at 34.03s
      finished test phase at 35.09s
  epoch3 start at 35.09s
      finished train phase at 45.41s
      finished test phase at 46.48s
  epoch4 start at 46.48s
      finished train phase at 56.82s
      finished test phase at 57.88s
  epoch5 start at 57.88s
      finished train phase at 68.24s
      finished test phase at 69.3s
  epoch6 start at 69.3s
      finished train phase at 79.65s
      finished test phase at 80.71s
  epoch7 start at 80.71s
      finished train phase at 91.04s
      finished test phase at 92.11s
  epoch8 start at 92.11s
      finished train phase at 102.46s
      finished test phase at 103.53s
  epoch9 start at 103.53s
      finished train phase at 113.89s
      finished test phase at 114.95s
  cost=114.94954171600001s
  ```

  (cost is ~`129.67192333600002s` if we use the stock `DataLoader` rather than `PrefetchedDataLoader`)

* provide a faster PrefetchedDataLoader

  MXNet 2.0 already ships a faster dataloader, but in v1.x the existing dataloader is slower and can be improved by changing its prefetch behavior to match what 2.0 does. Re-running the same `iternew.py` benchmark as above:

  ```python
  epoch0 start at 0.0s
      finished train phase at 11.28s
      finished test phase at 12.35s
  epoch1 start at 12.35s
      finished train phase at 22.73s
      finished test phase at 23.79s
  epoch2 start at 23.79s
      finished train phase at 34.15s
      finished test phase at 35.21s
  epoch3 start at 35.22s
      finished train phase at 45.59s
      finished test phase at 46.66s
  epoch4 start at 46.66s
      finished train phase at 57.01s
      finished test phase at 58.07s
  epoch5 start at 58.07s
      finished train phase at 68.43s
      finished test phase at 69.5s
  epoch6 start at 69.5s
      finished train phase at 79.87s
      finished test phase at 80.93s
  epoch7 start at 80.93s
      finished train phase at 91.3s
      finished test phase at 92.37s
  epoch8 start at 92.37s
      finished train phase at 102.74s
      finished test phase at 103.8s
  epoch9 start at 103.8s
      finished train phase at 114.17s
      finished test phase at 115.23s
  cost=115.23376344s
  ```

* Update test_gluon_data.py

  add a unit test for PrefetchedDataLoader

* Update dataloader.py

  update the documentation

* delete trailing whitespace

* remove the modification of num_workers

* Update dataloader.py

  the previous test shows there may be something wrong with `_MultiWorkerIter` when `__iter__()` is called inappropriately; I tried to fix it by moving the call here.

* add an auto_reload flag to the dataloader

  the added flag is set to `True` rather than the default `False`, since in MXNet 2.0 the default `nopython` mode prefetches data and reloads it automatically.

* Update dataloader.py

  merge `PrefetchedDataLoader` into `DataLoader`

* Update dataloader.py

  remove whitespace

* Update dataloader.py

  resolve the warning at L726

* Update dataloader.py

  fix typo

* Update dataloader.py

  fix the outdated `PrefetchedDataLoader`

* use pytest for the nested loop

* change auto_reload to False

* Revert "using pytest for nested loop"

  This reverts commit 2c8d858.

Co-authored-by: Leonard Lausen <[email protected]>
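The prefetching idea behind the change can be sketched in plain Python: a background thread pulls batches from the underlying iterator into a bounded queue, so the next batch is already materialized while the main loop is busy with its `sleep`-simulated compute. This is an illustrative sketch only, not MXNet's actual `PrefetchedDataLoader` implementation; the class name and buffer size are made up here.

```python
import threading
import queue

class PrefetchIter:
    """Illustrative prefetching wrapper: a daemon thread fills a bounded
    queue from the source iterator so consumers rarely wait for data."""

    _SENTINEL = object()  # marks exhaustion of the source iterator

    def __init__(self, iterable, prefetch=2):
        # A bounded queue caps memory: the worker blocks once `prefetch`
        # items are buffered ahead of the consumer.
        self._queue = queue.Queue(maxsize=prefetch)
        self._thread = threading.Thread(
            target=self._worker, args=(iter(iterable),), daemon=True)
        self._thread.start()

    def _worker(self, it):
        for item in it:
            self._queue.put(item)        # blocks when the buffer is full
        self._queue.put(self._SENTINEL)  # signal that the source is done

    def __iter__(self):
        return self

    def __next__(self):
        item = self._queue.get()
        if item is self._SENTINEL:
            raise StopIteration
        return item
```

Note this sketch also shares the limitation described above: because the worker thread starts consuming the source immediately, only one live iterator per source is safe at a time.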
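The `auto_reload` flag described in the commits above can be illustrated with a small sketch (again hypothetical, not MXNet's code): with `auto_reload=True` the loader eagerly constructs the next epoch's iterator as soon as the current one is exhausted, so prefetching for epoch N+1 can begin during end-of-epoch bookkeeping; with the default `auto_reload=False` it waits until `__iter__()` is called again.

```python
class AutoReloadLoader:
    """Illustrative sketch of auto_reload semantics. `make_iter` is a
    hypothetical factory that builds a fresh (prefetching) epoch iterator."""

    def __init__(self, make_iter, auto_reload=False):
        self._make_iter = make_iter
        self._auto_reload = auto_reload
        # Eager mode starts the first epoch's iterator at construction time.
        self._next_iter = make_iter() if auto_reload else None

    def __iter__(self):
        # Use the pre-built iterator if one is waiting, else build lazily.
        it = self._next_iter if self._next_iter is not None else self._make_iter()
        self._next_iter = None
        for item in it:
            yield item
        if self._auto_reload:
            # Eagerly build (and implicitly start prefetching for) the
            # next epoch before the caller asks for it.
            self._next_iter = self._make_iter()
```

A usage sketch: `AutoReloadLoader(lambda: iter([1, 2, 3]), auto_reload=True)` yields `[1, 2, 3]` on every pass, with each subsequent epoch's iterator created the moment the previous one finishes.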