Fix for ipfs.files.add
+ added corresponding tests
#104
Conversation
Great ideas here @bgrieder! Thanks for putting in time on this. I am working on the same functions. Something I did not think about was array inputs as well as buffer inputs. The data-importer can handle buffers, so this shouldn't be a problem. I notice that the go-ipfs cli will report an error when a bad file path is provided in the array, so the cli does support multiple files in an array with a propagated error. I imagine the API should behave similarly to js-ipfs-api.

I am also working on the export functions for cat/add/ls. We are currently planning to add the export to the data-importing module as you stated above. My current implementation requires the export functions like this.

Your test LGTM, but I would add browser tests as well as a larger file and a nested dir. Feel free to drop by #ipfs on freenode and ping me, voxelot, if you'd like to work on this with me. I'm hoping to have these commands working this weekend.
@nginnever Great. I do not want to duplicate efforts, but if I can help in any way, let me know. I will add a couple more tests.
Sounds great! Would love to have an extra pair of eyes and ideas on what I'm writing, and I'm sure @diasdavid would appreciate the help in CR of my stuff.

EDIT: Also feel free to improve 'files add' without me. I only did what you did, plus a check to see if it's a path being supplied, which could be done in the 'is-ipfs' module we have. I'll be working on the exporter function in 'data-importing' for the next 8 hours, and then if you could put some ideas into my js-ipfs fork on my repo that would be awesome.

When you go to write the browser tests for add in something like test-files-browser.js, I think just doing a buffer input will be good enough until we hear otherwise, since a browser doesn't have an FS to read a path to a file from. Cheers!
Added nested directories + large file + buffer tests. I cannot get a browser test going for the moment because simply re-enabling 'ipfs-data-importing' in the webpack conf of the karma.conf ends up with an illegal attempt to call
\o/, more people to help, awesome! 🌟
Yes, we definitely want
That is awesome! You might be interested in As @nginnever mentioned, he has been leading the charge on the Files API and its deps for
@diasdavid glad I can help. #60 is a good summary and work base.

@nginnever browser tests: the problem lies with the library js-ipfs-unixfs. This library is called by the data importer to import a
Unfortunately, this library reads the protobuf schema from the file system
Obviously this cannot work in a browser. I opened an issue against that lib: ipfs/js-ipfs-unixfs#5
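One possible bundler-safe workaround (an assumption on my part, not necessarily the fix that landed) is to inline the .proto schema as a string instead of reading it from disk at runtime; the `protocol-buffers` module accepts a schema string directly. The `Data` message below is a truncated illustration, not the real unixfs schema:

```javascript
// Assumed workaround for ipfs/js-ipfs-unixfs#5: ship the schema as a string
// so no fs access is needed at runtime. The message below is illustrative only.
//
// Before (breaks in the browser):
//   const schema = fs.readFileSync(path.join(__dirname, 'unixfs.proto'), 'utf8')
//
// After (bundler-safe; protocol-buffers also accepts a schema string):
const schema = `
message Data {
  optional bytes Data = 1;
  optional uint64 filesize = 2;
}
`
console.log(typeof schema) // string
```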
@bgrieder thanks for adding more tests and looking into this more! When I asked you to test larger files it got me investigating larger files in the go-ipfs implementation and I caught some sharding issues that may require us to rework the import and export logic to handle a merkle tree that doesn't just have an infinite amount of leaves on the 2nd level.
Good catch! This seems odd to me. You are certainly correct that the schema should not be able to be loaded from fs in the browser. Yet when I was testing the

In any case, we have done something as simple as this in the past to handle the protobuf. If you want to make a PR against the unixfs module to close this issue, ipfs/js-ipfs-unixfs#5, feel free.

@diasdavid, I am going to clean up my local repo and link my fork for #60, or open a PR on js-ipfs to document where I am in the current build, so @bgrieder can take a look and figure out where he can add ideas. Look for that in about an hour or two. I currently have
@nginnever js-ipfs-unixfs: I am having problems getting dignified.js to run browser tests. See ipfs/js-ipfs-unixfs#6. I have asked @dignifiedquire for help.
@nginnever as explained in ipfs/js-ipfs-unixfs#6, the issue with
@nginnever I had a look at your repos at https://github.com/nginnever/js-ipfs-data-importing and https://github.com/nginnever/js-ipfs, but I doubt they have your latest commits. I would be glad to test your code by incorporating it into my own project and giving you feedback.
@bgrieder I decided to go ahead and work on finishing
@nginnever Great... and my apologies if you felt pressured, that was not my intention.
No worries! I really want to work hard on this, so it's nice to get some push. Sorry for not being able to fork the code yet. Not exactly ready to be made into a PR, but you can see the direction I'm heading. Going to hack on this the next couple of days and then you'll be able to fork. I'll put a PR out before I write the tests for get and cat so you can help me with that if you'd like, as well as the new data-importer export functions.
It is all green now :) The dignified pipeline isn't a 'full buy-in', just a convenience; you are even able to use js-ipfs-unixfs without any bundler at all (which is awesome!).
@nginnever I started to look at your code in js-ipfs-data-importing and have a couple of initial comments:
The export signature would thus be something like either
I have prepared a repo with a small file, a large file and a buffer, which will ease writing tests on export.
@nginnever On second thought, I believe the signature should be something like
because
Converting streams to files would be a function of js-ipfs.
@bgrieder Thank you for the CR!
True for go-ipfs afaik, the other option there is a chunker since in the future we will want to support different sized nodes for different types of files for optimizations. Think we should leave an options object for now in case we want to add features in the future.
That is actually how I had it originally, but we decided at some point that we wanted to leave as much FS manipulation as possible out of the importer/exporter and make its job more concise: chunking and bundling data.
All of this is great! I was actually talking with @diasdavid last night about the best way to return the data in the exporter, after realizing that I couldn't do the recursive operations needed to keep track of the files in the

The above method will buffer a lot of data into memory, so this isn't ideal. The second method will be to use an event emitter in the cli core to listen for incoming files and consume them as they are pulled out of the database.
I'm going to be hacking these ideas together today and will hopefully have something working tomorrow that I can PR for you to add tests to. Also need some help with the HTTP API side of things. We really want to get these commands running in the browser this week!! :D Cheers!
This is a small fix to `ipfs.files.add()` with the corresponding tests.

I noticed that in https://github.com/ipfs/js-ipfs-api#add, the equivalent call accepts a file or an array of files. Should this module follow a similar API?
If yes, I can patch that as well. However, when an array is provided, the API is silent on what happens when one of the adds fails: should it fail fast and stop, or should it continue and report an Error in the returned array for that file, etc.
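To make the two options concrete, here is a hypothetical sketch of the 'continue and report' behavior; `addOne` stands in for the real single-file add, and per-file errors land in the results array instead of aborting the whole call:

```javascript
// Illustrative only: addOne stands in for the real single-file add. addAll
// records a per-file error in the results array rather than failing fast.
function addOne (file, cb) {
  if (file == null) return cb(new Error('invalid file'))
  cb(null, { path: file.path, hash: 'Qm...' + file.path }) // fake hash
}

function addAll (files, cb) {
  const results = []
  let pending = files.length
  files.forEach((file, i) => {
    addOne(file, (err, res) => {
      results[i] = err ? { error: err.message } : res // report, don't abort
      if (--pending === 0) cb(null, results)
    })
  })
}

addAll([{ path: 'a' }, null], (err, res) => {
  console.log(res[0].path, res[1].error) // a invalid file
})
```

A fail-fast variant would instead call `cb(err)` on the first error and discard the remaining results; which behavior the API should promise is exactly the open question above.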
Also, I am willing to work on `ipfs.files.cat()` and `ipfs.files.ls()`, but I wonder where I should put the code. For `add` it resides in https://github.com/ipfs/js-ipfs-data-importing. Should this package contain the exporting code too and be renamed accordingly, or should the functionality be developed on top of the block methods in js-ipfs?

Finally, on a side note, I am pretty excited with what is going on with IPFS. I have been one of the active maintainers of JXTA, which is (was) a full-fledged P2P package in Java. My company is actually running a peer-to-peer network on it in production. I always wanted to do this in JS but never found the time. Kudos.