Plans to add ls, object.stat, swarm.connect? #3252
Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
Finally, remember to use https://discuss.ipfs.io if you just need general support.
Yes, we're planning on fleshing out the API, so thanks for mentioning the methods you use; it helps us prioritise accordingly. We're trying to retire the object API. Have you considered using ipfs.files.stat to get the sizes of directories & files instead? E.g.:

```js
const stats = await ipfs.files.stat('/ipfs/Qmfoo')
```
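For reference, a quick sketch of what that call gives you (assuming the js-ipfs files.stat result shape; the ipfs instance and CID are placeholders):

```js
// Stat an immutable /ipfs path through the MFS API.
const stats = await ipfs.files.stat('/ipfs/Qmfoo')

console.log(stats.type)           // 'file' or 'directory'
console.log(stats.cumulativeSize) // total size of the DAG in bytes
```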
@achingbrain Thanks! We'll try out ipfs.files.stat.
The docs could definitely be improved here (and also js-IPFS supports IPFS paths in a few more places than go-IPFS), but the IPFS namespace is overlaid on top of the MFS commands, so you can use IPFS paths with them ✅
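To illustrate the overlay (a sketch, assuming an ipfs instance; the paths are placeholders):

```js
// The same MFS command accepts both kinds of paths:
await ipfs.files.stat('/my-dir/readme.md')     // mutable MFS path
await ipfs.files.stat('/ipfs/Qmfoo/readme.md') // immutable IPFS path
```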
Hi @bmann, thanks for reaching out! I'm happy that the Fission team is looking into adopting shared workers and would love to help make that effort succeed. I am also happy to jump on a call so we can sync up on a higher-bandwidth channel.
I am afraid I don't have enough context about the auth lobby or what that iframe is meant for, but I am guessing that different apps run on different origins and the iframe is meant to provide a shared origin for the shared worker. If my guess is accurate, that is pretty much the plan for a shared IPFS node across origins. There are some details about it here: https://hackmd.io/@gozala/H1LqrvvcI#Cross-Origin
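For readers following along, a hypothetical sketch of that iframe arrangement (the URL and message name are placeholders, not Fission's or js-ipfs's actual code):

```js
// In the app (any origin): embed the shared-origin iframe and wait
// for it to hand back a MessagePort connected to the SharedWorker.
const frame = document.createElement('iframe')
frame.src = 'https://shared-origin.example/ipfs.html' // placeholder URL
document.body.appendChild(frame)

window.addEventListener('message', ({ data, ports }) => {
  if (data === 'ipfs-port') {
    const port = ports[0]
    // `port` can now be handed to an ipfs-message-port-client instance.
  }
})
```

```js
// In the iframe (shared origin): own the SharedWorker and transfer
// its MessagePort up to the embedding window.
const worker = new SharedWorker('ipfs-worker.js')
window.parent.postMessage('ipfs-port', '*', [worker.port])
```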
Absolutely! That is the plan, but there are quite a few things that need to be hammered out (which the linked document hints at), like:
We have arrived at the current API subset based on feedback from the teams that were interested in this effort. It would really help to get a better understanding of what the concrete APIs are used for so we can meet those needs, instead of aiming for full API compatibility with JS IPFS, mostly because doing so in an efficient way is going to be challenging, and so far we have been finding that what devs need can be achieved in much simpler and more efficient ways.
@Gozala we’re in Discord chat at https://fission.codes/discord or send me an email at boris @ fission.codes — would love to get on a call.
Hey @Gozala
You guessed correctly.
Makes total sense.
(2) We want the metadata of files and directories, such as their sizes and creation/modification timestamps. We could keep track of all that metadata ourselves, but I guess that would defeat the purpose.
(3) And lastly, we want swarm.connect.
Thanks for the clarifications, they do help. I would, however, still like to understand the use case more generally. The reason is that all the teams I have talked to mostly used the DAG API with the IPFS network for storing and retrieving data, and a tiny subset of the files API, mostly to refer to files from the data points. There is also a yet-unmet need for a pubsub API. Given that insight we started thinking about https://github.com/ipfs/js-dag-service. If Fission's use cases and requirements are different, it would really help to understand in which way, which can be a little tricky to gather from pure API requirements.
All that information can be obtained from files.stat. It would also be good to understand whether MFS fits into Fission in some way or if it is irrelevant from Fission's standpoint.
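As a concrete sketch of the DAG-centric usage pattern described above (an illustration, assuming an ipfs instance; the options follow the js-ipfs DAG API of that era):

```js
// Store structured data with the DAG API and read it back by CID.
const cid = await ipfs.dag.put(
  { hello: 'world' },
  { format: 'dag-cbor', hashAlg: 'sha2-256' }
)
const { value } = await ipfs.dag.get(cid)
console.log(value.hello) // 'world'
```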
You definitely should not have to do that. If you do still need
Having to dial just to re-establish lost connections seems like a workaround for a problem that should not be there in the first place; the connection manager should just do it (and I know a major overhaul is in progress there). It is also the kind of API I'm a bit hesitant about, because with node sharing across origins (or native support in browsers) it's going to require some extra caution. That said, I would really like to support your effort in adopting a shared worker. I think it would be best to move the piece of logic that ensures connections to the bootstrap nodes to the worker side, where you have a full IPFS node available. If for some reason app logic needs to guide this process (meaning you need input from the app state to decide what to connect to), I would really like to understand that better.
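A minimal sketch of what that worker-side logic could look like (hypothetical: the multiaddr is a placeholder and the interval is arbitrary; only the regular swarm.peers/swarm.connect calls are used):

```js
// Inside the worker, next to the full IPFS node: periodically make
// sure we are still connected to our primary node, and redial if not.
const PRIMARY = '/dns4/node.example.com/tcp/443/wss/p2p/QmPrimaryPeerId'

setInterval(async () => {
  const peers = await ipfs.swarm.peers()
  const connected = peers.some(p => PRIMARY.includes(String(p.peer)))
  if (!connected) {
    await ipfs.swarm.connect(PRIMARY).catch(console.error)
  }
}, 30 * 1000)
```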
Definitely! The use case is a decentralised file system, with a public part and a private/encrypted part. That file system is shared between applications, which can store data on there. So, like other file systems, you have files and directories, and we need to know the creation/modification timestamps and the sizes of the files and directories. Apps have access to specific parts of the file system, and permission is given by the user (on that auth lobby we talked about earlier). If you'd like to know more, we have a whitepaper that explains it in full detail: https://whitepaper.fission.codes/file-system/file-system-basics. Here's our app called Drive, which you can use to browse through your entire file system.
Oh cool, didn't know that! Also saw in the code that the
Yeah, we need to list directory entries. So yeah, if you could add ls, that would be great.
For sure 💯 Good to know it's being worked on. In that case we won't need swarm.connect.
Makes sense 👍
Just submitted a pull request for ls.
@vasco-santos it would be great to get your input regarding my comment about keeping connections alive.
Thanks for the heads up @Gozala. libp2p does not keep connections alive at the moment. This will be worked on in the Connection Manager Overhaul, as @Gozala mentioned. You can track the progress in the milestones table; there is one point for it. I expect this to land in
@Gozala Thanks for adding ls! Is there anything else that changed? Can't seem to do a dag.get anymore. Details:
@icidasset I don't believe there have been any intentional changes, although changes in ipfs-core could cause some regressions, which is maybe what happened 🤨 If so, it is surprising that no tests have caught this, so it would be good to create a minimal reproducible test case so we can add it to the suite.
Hmm, so the error in the screenshot seems to come from js-ipfs/packages/ipfs-message-port-server/src/dag.js, lines 67 to 71 (at 1311160), which leads into js-ipfs/packages/ipfs-message-port-protocol/src/cid.js, lines 39 to 40 (at 1311160).
Would be good to know which of those two lines is throwing, so we know if it is
We don't support strings there; see js-ipfs/packages/ipfs-message-port-client/src/dag.js, lines 64 to 68 (at 1311160).
Are you by any chance providing a transfer list (see js-ipfs/packages/ipfs-message-port-protocol/src/cid.js, lines 21 to 26, at 1311160) and then reading your CID instance with a stale buffer, which is maybe what that error above is trying to say?
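For context, a paraphrase of the pattern at that cid.js location (not the exact js-ipfs source; the CID import is an assumption): structured cloning strips class prototypes, so the protocol layer re-attaches them on the receiving side.

```js
const CID = require('cids')

// Paraphrased idea of decodeCID: a CID arrives over postMessage as a
// plain object, so its prototypes are restored in place.
function decodeCID (encoded) {
  Object.setPrototypeOf(encoded.multihash, Uint8Array.prototype)
  Object.setPrototypeOf(encoded, CID.prototype)
  return encoded
}

// If `encoded` is a string, `encoded.multihash` is undefined and
// Object.setPrototypeOf throws a TypeError, matching the error above.
```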
Do we have a bug report on file for the bcrypt install error? If not, can you please create one? Having to resort to workarounds is not the dev experience we'd like to have.
Could you please create reports for those too (and cc me)? We should not have to resort to workarounds. For what it's worth, we have one example that demonstrates ipfs-message-port-server/client with a shared worker and does not require such workarounds: https://github.com/ipfs/js-ipfs/tree/master/examples/browser-sharing-node-across-tabs. There is also another example under review that demonstrates use with a service worker, and that one also did not require workarounds. I'm also curious what
@Gozala It seems to be this line, if I pass a CID as a string to dag.get:

```js
Object.setPrototypeOf(cid.multihash, Uint8Array.prototype)
```

If I use a CID class instance, I'm not passing in the
No, there isn't. I looked deeper into this and it isn't an issue with js-ipfs, but rather the new macOS, which shifted a few things around, causing the bcrypt install to fail. Could also be my Nix setup though 🤔 I'll try to figure it out and make an issue if needed.
Good news: it looks like this has been fixed in the newer versions 👍
Oh, I should also note, regarding this error (without a
That error seems to originate from js-ipfs/packages/ipfs-message-port-server/src/server.js, lines 215 to 218 (at bbcaf34).
String CIDs aren't supported (see https://github.com/ipfs/js-ipfs/blob/master/packages/ipfs-message-port-client/src/dag.js#L64). My hope is we can migrate loosely defined types into more concrete ones in ipfs-core as well.
I have no way to explain why this would happen, however, unless somehow you end up with a CID that doesn't have a multihash. That is the only thing I can think of that might explain it. Either way, I would check the CID instance passed to dag.get. If you are able to add a breaking test to the suite, that would greatly help, as we can then reproduce the issue and fix it. That said, I suspect there is something else going on in your setup, because we have plenty of tests that exercise this.
Ok, that is interesting, but it should be an unrelated issue. It suggests that we are attempting to transfer the same buffer multiple times, and since we transfer everything from the worker back to the main thread, anything returning the same instance would exhibit that issue. It would be great to identify when that is the case, however, so it can be addressed.
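To make the failure mode concrete, a standalone sketch using plain browser APIs (no js-ipfs involved): transferring an ArrayBuffer detaches it, so listing the same buffer twice throws.

```js
const buf = new Uint8Array([1, 2, 3]).buffer
const { port1 } = new MessageChannel()

// Case 1: the same buffer twice in one transfer list throws.
// port1.postMessage({ a: buf, b: buf }, [buf, buf]) // DataCloneError

// Case 2: a buffer that was already transferred is detached and
// cannot be transferred again.
port1.postMessage('first', [buf])  // ok, buf is now detached
port1.postMessage('second', [buf]) // throws a DataCloneError
```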
It just occurred to me that the "duplicate of an earlier ArrayBuffer" problem might be causing the other issue. That is, if that error occurs when we send some CID from the server, it probably ends up corrupt on the client. And if you then use that corrupt CID to do the dag.get, that would explain the failure.
@icidasset any chance we can narrow this down, so it could be reproduced?
I'll see if I can narrow it down 👍
Maybe this is the thing that happens here? What if I get a
My case might also be a bit different than in the tests, because I'm using an iframe.
If I wanted to write a test for this, how would I go about that?
I think you're right; I just was able to reproduce that.
It is clear that something the server sends to the client refers to the same ArrayBuffer more than once.
Both would cause this problem, given that they would lead to duplicates.
My guess is it isn't the iframe; it's something about the dag nodes, probably multiple links to the same node. I'm still surprised we end up with the same CID instance instead of two equal ones, as I can't recall any caching logic that would do it, but that seems like the most likely problem. Unfortunately, that does not at all explain why
If you want to write a test that is exercised across ipfs-core, ipfs-http-client, and ipfs-message-port-client, it's best to add it to the interface tests, so somewhere here: https://github.com/ipfs/js-ipfs/blob/master/packages/interface-ipfs-core/src/dag/get.js. If it is specific to a particular package, it can be added into that package (like ipfs-message-port-client), more or less how ipfs-http-client does it here: https://github.com/ipfs/js-ipfs/blob/master/packages/ipfs-http-client/test/dag.spec.js
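As a rough idea of the shape such a test could take (hypothetical names, mirroring the mocha/chai style of the existing interface tests; `ipfs` and `expect` come from the test harness):

```js
it('should round-trip a dag node through dag.put and dag.get', async () => {
  const data = { foo: 'bar' }
  const cid = await ipfs.dag.put(data, { format: 'dag-cbor', hashAlg: 'sha2-256' })
  const result = await ipfs.dag.get(cid)

  expect(result.value).to.deep.equal(data)
})
```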
@icidasset I have created a separate issue about transferring duplicate ArrayBuffers, which you identified: #3402. Would you be interested in and have time to take a stab at it? It should be fairly straightforward, I think, mostly just replacing arrays with sets.
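A minimal sketch of the arrays-to-sets idea (a hypothetical helper, not the actual js-ipfs code): collecting transferables in a Set collapses duplicates before they reach postMessage.

```js
// Dedupe the transfer list so each ArrayBuffer appears at most once.
function postWithTransfer (port, message, buffers) {
  const transfer = new Set()
  for (const buffer of buffers) {
    transfer.add(buffer) // duplicates collapse here
  }
  port.postMessage(message, [...transfer])
}
```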
I don't have that much time, but I'll see what I can do. |
I think this can be closed @Gozala. The main issues we still have are to do with connectivity to our primary node; we often have to do a swarm.connect to reconnect. The storage quota is something I still need to look at. Did you have any plans for that?
I'm glad to hear that! And thanks for contributing fixes!
Needing this workaround is unfortunate, but let's have a separate issue/thread to figure out the fix there, whether it's keep-alive or something else. In the meantime, I would suggest just moving that logic into the worker thread, assuming there is no reason it must be in the main thread. If for some reason it must remain in the main thread, let's discuss so we can figure out a better way to go about it.
I have not really done much beyond just thinking about this, which goes as follows:
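For background on the quota question, a small sketch using the standard browser StorageManager API (general platform functionality, not a js-ipfs feature):

```js
// How much storage is this origin using, and what is its quota?
const { usage, quota } = await navigator.storage.estimate()
console.log(`using ${usage} of ${quota} bytes`)

// Ask the browser not to evict this origin's data under pressure.
const persisted = await navigator.storage.persist()
console.log(persisted ? 'persistent' : 'best-effort')
```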
We’re planning to use a shared IPFS worker on our webnative apps, so your data is available instantly across all those apps. We’ve added an HTML file and a worker JS file to our auth lobby, so you can load that in using an iframe.
Staging version of that HTML file:
https://round-aquamarine-dinosaur.fission.app/ipfs.html
Ideally you won’t need to think about iframes at all, and the webnative sdk will handle that for you.
For this to work we would need a few extra methods on ipfs-message-port-client:
ls
, listing directory contents.object.stat
, getting the size of directories and files.swarm.connect
, connecting to peers.Are you planning on adding these methods?
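For concreteness, here is what we would want to be able to call (illustrative only; the CID and multiaddr are placeholders, and the signatures follow the regular js-ipfs core API):

```js
// List directory contents.
for await (const entry of ipfs.ls('QmFoo')) {
  console.log(entry.name, entry.size)
}

// Get the size of a directory or file.
const stat = await ipfs.object.stat('QmFoo')

// Connect to a peer.
await ipfs.swarm.connect('/dns4/node.example.com/tcp/443/wss/p2p/QmPeerId')
```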
Also posted to our forum https://talk.fission.codes/t/shared-ipfs-worker/996 cc @icidasset