This repository has been archived by the owner on Mar 10, 2020. It is now read-only.

fix(get): properly handled nested content #384

Merged · 3 commits · Nov 8, 2016

Conversation

@dignifiedquire (Contributor) commented Sep 16, 2016

```js
  })
  next()
ex.on('entry', (header, stream, next) => {
  stream.on('end', next)
```
Contributor:

This means that now one file has to finish before we 'receive' the next. I'm not sure how this is an improvement; the previous version pushed a readable stream that gets filled as soon as we know the file exists.

Contributor Author:

This already has to happen if you read the documentation of tar-stream; this change only makes sure that we actually process things correctly. The current implementation blows up and dies on larger sets of files.

Contributor:

Checking https://www.npmjs.com/package/tar-stream#extracting, it only says that "next" will emit the next entry. That said, I should be able to get a readable stream for all the files within the tar first and then consume them.

Contributor Author:

@diasdavid you missed this part:

The tar archive is streamed sequentially, meaning you must drain each entry's stream as you get them or else the main extract stream will receive backpressure and stop reading.

@daviddias (Contributor):

.get tests are failing with timeouts

  1) .files .get with a base58 encoded multihash:
     Error: timeout of 80000ms exceeded. Ensure the done() callback is being called in this test.

@dignifiedquire (Contributor Author) commented Sep 25, 2016

.get tests are failing with timeouts

@diasdavid

Yes, they can't finish because they don't drain the content of each received file; instead they try to concat it, which waits for an 'end' event that never fires.

I suggest changing the interface-ipfs-core tests accordingly for now: the current version is broken for anything with multiple files, and the alternative is writing our own tar extractor.

@daviddias (Contributor):

I suggest changing the interface-ipfs-core tests accordingly for now, as the current version is broken for everything with multiple files and the alternative is writing our own tar extractor

@dignifiedquire I understand what you mean; however, something is off, because these tests were passing before, which means that calling next does indeed start the next stream even if the current one hasn't been flushed completely. It seems to be an incoherence in the tar-stream documentation, which we found by chance and which helped us achieve what we wanted.

@dignifiedquire (Contributor Author):

@diasdavid we still need to figure this one out

@dignifiedquire dignifiedquire changed the title [WIP] fix(get): properly handled nested content fix(get): properly handled nested content Nov 8, 2016