It would be really nice if IPFS could support using more than one datastore at the same time for storing blocks: for example, one datastore for an active cache and another for more permanent data, perhaps on a read-only filesystem. Some form of multiple datastore support is required for the filestore I am working on in pull request #2634 (towards issue #875).
There are many open questions about how to handle this. The point of this issue is to open a discussion; I intend to implement something once there is some sort of agreement on the semantics.
Assuming this is something that is wanted, let's start off the discussion with this:
How should the pinner and garbage collector interact with multiple datastores? As I see it, there should be a designated datastore for the cache, and the garbage collector should only work on that datastore. It should ignore blocks in other datastores, with the possible exception of reading blocks from them to resolve recursive pins. Blocks in all other datastores should be considered implicitly pinned.
To support the view of one datastore being the cache, new blocks should be written to the cache by default, and explicit API calls should be made to add blocks to other datastores or to move blocks from the cache datastore to other datastores.
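
To make that concrete, here is a rough sketch of what the write side could look like. This is purely illustrative: the `Datastore` interface below is a simplified stand-in for the real go-datastore interface, and `MultiBlockstore` and `MoveTo` are hypothetical names, not existing APIs.

```go
package multistore

import "fmt"

// Simplified stand-in for the real go-datastore interface (illustration only).
type Datastore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
	Has(key string) (bool, error)
	Delete(key string) error
}

// MultiBlockstore holds an ordered list of mounts under "/blocks".
// mounts[0] is the cache: the only mount the garbage collector touches.
// Blocks in any other mount are treated as implicitly pinned.
type MultiBlockstore struct {
	mounts []Datastore
}

// Put writes new blocks to the cache datastore by default.
func (m *MultiBlockstore) Put(key string, value []byte) error {
	return m.mounts[0].Put(key, value)
}

// MoveTo copies a block from the cache into another mount and then deletes
// it from the cache, taking it out of the garbage collector's reach.
func (m *MultiBlockstore) MoveTo(key string, dest int) error {
	if dest <= 0 || dest >= len(m.mounts) {
		return fmt.Errorf("invalid destination mount %d", dest)
	}
	val, err := m.mounts[0].Get(key)
	if err != nil {
		return err
	}
	if err := m.mounts[dest].Put(key, val); err != nil {
		return err
	}
	return m.mounts[0].Delete(key)
}
```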
Thoughts?
Thanks, @whyrusleeping. From what I understand, we already have the infrastructure in place for using different datastores based on the key prefix. What I am proposing is to allow multiple datastores under the "/blocks" prefix; that is what I need for the filestore. For reading, each datastore is looked up in sequence until the block is found.
Think of it as something like UnionFS for the IPFS datastore.
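
Continuing the sketch from the issue description (same simplified interface, hypothetical names), the read path would just walk the mounts in order:

```go
// Get tries each mount in order and returns the first hit, so blocks in any
// datastore are visible under "/blocks" (UnionFS-style lookup).
func (m *MultiBlockstore) Get(key string) ([]byte, error) {
	for _, d := range m.mounts {
		if val, err := d.Get(key); err == nil {
			return val, nil
		}
	}
	return nil, fmt.Errorf("block %q not found in any mount", key)
}

// Has reports whether any mount contains the block.
func (m *MultiBlockstore) Has(key string) (bool, error) {
	for _, d := range m.mounts {
		if ok, err := d.Has(key); err == nil && ok {
			return true, nil
		}
	}
	return false, nil
}
```

The obvious cost is that a miss has to touch every mount, so the order of the mounts (cache first) would matter for performance.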