
AttributeError: module 'tensorflow.contrib.data' has no attribute 'parallel_interleave' #558

Closed
wachaong opened this issue Feb 5, 2018 · 19 comments

Comments

@wachaong

wachaong commented Feb 5, 2018

File "/home/work/wangchao/python/lib/python3.6/site-packages/tensor2tensor/data_generators/problem.py", line 521, in dataset
tf.contrib.data.parallel_interleave(
AttributeError: module 'tensorflow.contrib.data' has no attribute 'parallel_interleave'

t2t version : 1.4.3
tf version : 1.4.1

@martinpopel
Contributor

T2T 1.4.3 needs TF 1.5 (as stated in setup.py).
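A quick sanity check, as a minimal sketch: print the installed TF version and test whether the contrib symbol exists (parallel_interleave only appears in tf.contrib.data from TF 1.5).

import tensorflow as tf

# parallel_interleave was added to tf.contrib.data in TF 1.5,
# so the second line prints False on TF 1.4.x.
print(tf.__version__)
print(hasattr(tf.contrib.data, "parallel_interleave"))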

@wachaong
Author

wachaong commented Feb 6, 2018

Thanks!

@wachaong wachaong closed this as completed Feb 6, 2018
@Alexyitx

I also hit this issue. Can you tell me how to solve it?

@martinpopel
Contributor

@lihongwei521: update to the newest T2T and, if possible, also to TF >= 1.5.
There is a check, so it may work with older TF as well.

@RunshengZhu

@martinpopel The check you provided works. My problem is solved and training runs fine on TF 1.4, thanks!

@Tomandjob

I hit this issue too on TF 1.4. Could you tell me how to fix it, please?

@martinpopel
Contributor

@Yangandtom: If you cannot update to a newer TF, you must downgrade T2T to a version which supports TF 1.4 (this info is in setup.py).

@Tomandjob

What do you mean by setup.py? I found the setup.py in the check you provided, and it has version='1.5.6'.

@martinpopel
Contributor

Here you can see that T2T 1.6.0 requires tensorflow>=1.5.0.
And here you can see that T2T 1.5.7 required tensorflow>=1.4.1.
Similarly, T2T 1.2.9 supported tensorflow>=1.3.0.

@Tomandjob

I tried installing T2T 1.2.9 with TF 1.4.0, but the issue still exists.

@samanthawyf

@Yangandtom @martinpopel I met the same problem: module has no attribute 'parallel_interleave'.
My TensorFlow is 1.4.0 and I have not installed T2T yet. From your reply I gather that T2T 1.2.9 supported tensorflow>=1.3.0, so I am wondering whether installing T2T 1.2.9 would solve the problem. I also want to ask how to install T2T 1.2.9, since the command pip install tensor2tensor installs the newest version.

@martinpopel
Contributor

pip install tensor2tensor==1.2.9

@samanthawyf

@martinpopel Thanks for your reply. After installing T2T 1.2.9, I still hit the same problem as @Yangandtom. I suspect TensorFlow 1.4 doesn't support tensorflow.contrib.data.parallel_interleave?

@simo23

simo23 commented Jun 21, 2018

If you are using TF 1.4, a non-optimal solution is to call the function sloppy_interleave (see the TF 1.4 docs), which I believe is the older version of the function.

This worked for me.
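For reference, a minimal sketch of that workaround, assuming a generic TFRecord pipeline (the shard names here are hypothetical, not T2T's actual reading code):

import tensorflow as tf

# Hypothetical list of TFRecord shards.
filenames = tf.data.Dataset.from_tensor_slices(["train-00000", "train-00001"])

# sloppy_interleave is available in tf.contrib.data on TF 1.4; unlike
# parallel_interleave it has no sloppy= keyword (sloppiness is implied).
dataset = filenames.apply(
    tf.contrib.data.sloppy_interleave(
        lambda f: tf.data.TFRecordDataset(f),  # read one shard per element
        cycle_length=2))                       # shards read concurrently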

@levinwil

levinwil commented Jul 9, 2018

This is because you are using an old version of tensorflow.

The easiest fix is to install the latest version of tensorflow.

If you need to stay on your current version of tensorflow, perhaps for compatibility with CUDA 8, you can change the line to the following:
"records_dataset = filename_dataset.interleave(file_read_func, cycle_length=config.num_readers, block_length=config.read_block_length)"

@paullinzipeng

@levinwil This worked like magic for me! Could you tell me where to find this code?

@levinwil

I wrote it myself. Glad I could help!

@paullinzipeng

Thank you! I've been trying to fix this problem for days!

@guilhermeh2m

@levinwil Thanks a lot, man! It works for me in TensorFlow 1.4.
I commented out the "records_dataset = ..." line in the file "...tensorflow/models/research/object_detection/builders/dataset_builder.py"

and added your line instead:
"records_dataset = filename_dataset.interleave(file_read_func, cycle_length=config.num_readers, block_length=config.read_block_length)"

and it works for training MobileNet with the TensorFlow Object Detection API.
