Swarm DB Data Pipelines
Swarm DB Data Pipelines for long-running, intensive storage tasks
Summary
Imagine a "data dex", where you ask the dex "Move up to 1 MB of Filecoin data to Swarm and update cid references", and a set of smart contracts, routers, oracles and bridges gives a single quote and estimation of a work order. Here we define how this is a potential paradigm and revolutionary idea when used in conjuction with the forthcoming Swarm DB.
Guide-level explanation
Similar to the ZigZag Arweave Bridge, Swarm DB Data Pipelines is agnostic by design: you choose the source and destination data storage providers and the fungible token to use for payment.
The actors in this use case are the following (a rough interface sketch is given after the list):
Dex (Decentralized Exchange)
A set of dexes that maintain an Automated Market Maker with support for the decentralized storage chain token.
E.g. quote 100 DAI of BZZ
Storage Allocation contract
A set of oracles or smart contracts to query storage allocation.
E.g. quote 1 MB of storage in Filecoin and Arweave
Swarm DB
As part of its feature set, Swarm DB receives queries for job / work order computation estimates and executes them.
E.g. quote 256 MB memory, 1 GB bandwidth and 20 CPU shares*
*The WASM engine must be able to calculate estimates based on memory, CPU and bandwidth.
Destination chain
A chain which is compatible with multiformats / IPLD.
Routers and/or bridges
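To make the roles concrete, below is a minimal TypeScript sketch of how these actors could be modeled. None of these interfaces exist today; every name (DexQuoter, StorageAllocator, SwarmDbEstimator, WorkOrderEstimate) is a hypothetical placeholder meant only to illustrate the shape of the quotes each actor returns.

```ts
// Hypothetical actor interfaces for the data pipeline; illustrative only.

// A dex/AMM quote: how much of the payment token buys `amountOut` of the storage token.
interface DexQuote {
  tokenIn: string   // e.g. "DAI"
  tokenOut: string  // e.g. "BZZ"
  amountIn: bigint  // amount of tokenIn the user pays (smallest unit)
  amountOut: bigint // amount of tokenOut received (smallest unit)
}

interface DexQuoter {
  quote(tokenIn: string, tokenOut: string, amountIn: bigint): Promise<DexQuote>
}

// Storage allocation oracle/contract: price of storing `sizeBytes` on a given chain.
interface StorageAllocator {
  quoteStorage(chain: 'filecoin' | 'arweave' | 'swarm', sizeBytes: bigint): Promise<bigint>
}

// Swarm DB work order estimator: memory, bandwidth and CPU shares for a job.
interface WorkOrderEstimate {
  memoryBytes: bigint
  bandwidthBytes: bigint
  cpuShares: number
  cost: bigint // denominated in the storage chain token
}

interface SwarmDbEstimator {
  estimate(carReference: string): Promise<WorkOrderEstimate>
}
```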
Technical Implementation
We have a use case where we need to move a portion of data already stored in Filecoin while maintaining linkability; that is, the set of CARs/CIDs copied to Swarm (or to any other IPLD-compatible destination) can still be referenced and traversed.
An implementation that satisfies this use case requires:
Select Filecoin CIDs to move
User selects a set of CIDs; a CAR is created and stored in Swarm.
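As a rough sketch of this step, the selected blocks could be packed into a CAR with @ipld/car and the resulting bytes uploaded to Swarm with bee-js. The fetchBlock helper, the Bee endpoint and the postage batch ID are assumptions; this is illustrative, not the actual Swarm DB implementation.

```ts
import { CarWriter } from '@ipld/car'
import { CID } from 'multiformats/cid'
import { Bee } from '@ethersphere/bee-js'

// Hypothetical helper: fetch the raw bytes of a block from a Filecoin/IPFS gateway.
declare function fetchBlock(cid: CID): Promise<Uint8Array>

async function packAndUpload(rootCids: CID[], beeUrl: string, postageBatchId: string) {
  // Create a CAR with the selected roots and stream the blocks into it.
  const { writer, out } = CarWriter.create(rootCids)
  const chunks: Uint8Array[] = []
  const collecting = (async () => {
    for await (const chunk of out) chunks.push(chunk)
  })()

  for (const cid of rootCids) {
    await writer.put({ cid, bytes: await fetchBlock(cid) })
  }
  await writer.close()
  await collecting

  // Concatenate the CAR stream and upload it to Swarm under an existing postage batch.
  const car = new Uint8Array(chunks.reduce((total, c) => total + c.length, 0))
  let offset = 0
  for (const c of chunks) {
    car.set(c, offset)
    offset += c.length
  }
  const bee = new Bee(beeUrl)
  return bee.uploadData(postageBatchId, car)
}
```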
CAR is sent to a Swarm DB Job Work Order for Quotation
A computation query is required for cost planning.
CAR is sent to a Filecoin Job Work Order for Quotation
A computation query is required for cost planning (to update CID references).
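A sketch of what the two quotation calls above might look like; the CostPlan and Quoter shapes are assumptions, not an existing API.

```ts
// Hypothetical shapes for the two quotation calls; names are illustrative only.
interface CostPlan {
  memoryBytes: bigint
  bandwidthBytes: bigint
  cpuShares: number
  cost: bigint // in the storage chain's native token
}

interface Quoter {
  estimate(carReference: string, updateCidReferences: boolean): Promise<CostPlan>
}

// Ask both the Swarm DB side and the Filecoin side to price the same CAR.
async function requestQuotes(
  swarmDb: Quoter,
  filecoin: Quoter,
  carRef: string,
): Promise<[CostPlan, CostPlan]> {
  return Promise.all([
    swarmDb.estimate(carRef, false), // store the CAR contents as BeeSON in Swarm
    filecoin.estimate(carRef, true), // update CID references on the Filecoin side
  ])
}
```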
Dapp receives cost planning and quotes in the selected fungible token at dex prices
Costs are displayed in the user's selected fungible token.
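The conversion the dapp performs is a simple price multiplication, sketched below. The decimals and prices are placeholders (BZZ uses 16 decimals, with PLUR as its smallest unit); the dex quote itself would come from the AMM described earlier.

```ts
// Convert a cost expressed in the storage chain token (smallest unit) into the user's
// fungible token using a dex price. `pricePerToken` is the payment token's smallest
// unit per one whole storage token; all values here are illustrative.
function costInPaymentToken(
  costInStorageToken: bigint,   // e.g. PLUR, the smallest unit of BZZ
  storageTokenDecimals: number, // e.g. 16 for BZZ
  pricePerToken: bigint,        // e.g. wei-DAI per 1 BZZ
): bigint {
  return (costInStorageToken * pricePerToken) / 10n ** BigInt(storageTokenDecimals)
}

// Example with made-up numbers: 2.5 BZZ at 0.40 DAI per BZZ = 1.0 DAI.
const daiCost = costInPaymentToken(25n * 10n ** 15n, 16, 4n * 10n ** 17n)
```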
Dapp sends cost planning quote to Swarm DB router network for availability
A router asks registered Swarm DB instances for quotations of the work orders.
Dapp approves and sends work orders to a Swarm DB node
One or more work orders are sent to the available Swarm DB node.
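The router fan-out and hand-off could look roughly like the sketch below. RegisteredNode, quoteWorkOrder and submitWorkOrder are hypothetical names, and the selection rule (cheapest available node) is only one possible policy.

```ts
// Hypothetical router-side fan-out: ask every registered Swarm DB instance for a quote,
// keep those that declare availability, then send the approved work order to one of them.
interface RegisteredNode {
  endpoint: string
  quoteWorkOrder(carRef: string): Promise<{ cost: bigint; available: boolean }>
  submitWorkOrder(carRef: string, signedApproval: string): Promise<{ orderId: string }>
}

type Candidate = { node: RegisteredNode; quote: { cost: bigint; available: boolean } }

async function collectQuotes(nodes: RegisteredNode[], carRef: string): Promise<Candidate[]> {
  const settled = await Promise.allSettled(
    nodes.map(async (node) => ({ node, quote: await node.quoteWorkOrder(carRef) })),
  )
  // Keep only nodes that answered and declared themselves available.
  return settled
    .filter((r): r is PromiseFulfilledResult<Candidate> =>
      r.status === 'fulfilled' && r.value.quote.available)
    .map((r) => r.value)
}

async function approveAndSend(candidates: Candidate[], carRef: string, signedApproval: string) {
  // One possible policy: hand the approved work order to the cheapest available node.
  const cheapest = candidates.reduce((a, b) => (a.quote.cost <= b.quote.cost ? a : b))
  return cheapest.node.submitWorkOrder(carRef, signedApproval)
}
```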
Storage allocation is purchased, and computation is executed
Required postage stamp batch IDs, tokens and gas are purchased.
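For the Swarm side of the purchase, buying a postage batch could look like the bee-js sketch below. The amount and depth are placeholders that would come from the cost plan, and depending on the bee-js/Bee version this call may live on the debug API instead of the main Bee client.

```ts
import { Bee } from '@ethersphere/bee-js'

// Buy the storage allocation needed for the work order. The amount (per-chunk balance
// in PLUR) and depth (2^depth chunks) are placeholders for values from the cost plan.
async function purchaseAllocation(beeUrl: string) {
  const bee = new Bee(beeUrl)
  const amount = '100000000' // placeholder per-chunk balance
  const depth = 22           // placeholder batch depth
  const batchId = await bee.createPostageBatch(amount, depth)
  return batchId.toString()
}
```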
CAR is sent to the work order node for computation
The node fetches the CAR file and converts it to BeeSON, which is then stored in Bee together with inclusion proofs.
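A sketch of that conversion loop, assuming dag-cbor encoded blocks for illustration. The toBeeSonBytes helper stands in for the actual BeeSON serialization (e.g. via @fairdatasociety/beeson); the inclusion proofs come from BeeSON's binary-merkle-tree layout and are not shown here.

```ts
import { CarReader } from '@ipld/car'
import * as dagCbor from '@ipld/dag-cbor'
import { Bee } from '@ethersphere/bee-js'

// Hypothetical helper: serialize a decoded IPLD value as BeeSON bytes.
declare function toBeeSonBytes(value: unknown): Uint8Array

async function carToBeeSon(carBytes: Uint8Array, beeUrl: string, postageBatchId: string) {
  const bee = new Bee(beeUrl)
  const reader = await CarReader.fromBytes(carBytes)
  const mapping: Record<string, string> = {}

  for await (const { cid, bytes } of reader.blocks()) {
    const value = dagCbor.decode(bytes)      // assumes dag-cbor encoded blocks
    const beeSonBytes = toBeeSonBytes(value) // convert the decoded value to BeeSON
    const result = await bee.uploadData(postageBatchId, beeSonBytes)
    mapping[cid.toString()] = result.reference.toString() // old CID -> new Swarm reference
  }
  return mapping
}
```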
Once the work order is complete or rejected, a notification is emitted by a smart contract
Dapp is notified of changes.
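On the dapp side this notification could be consumed as a contract event, as in the ethers v6 sketch below. The contract address, ABI fragment and event names are placeholders; only the emit-and-subscribe pattern is the point.

```ts
import { ethers } from 'ethers'

// Placeholder ABI fragment for the work order contract's notifications.
const abi = [
  'event WorkOrderCompleted(bytes32 indexed orderId, bytes32 swarmReference)',
  'event WorkOrderRejected(bytes32 indexed orderId, string reason)',
]

function watchWorkOrder(provider: ethers.Provider, contractAddress: string, orderId: string) {
  const contract = new ethers.Contract(contractAddress, abi, provider)

  contract.on('WorkOrderCompleted', (id, swarmReference) => {
    if (id === orderId) console.log('work order complete, stored at', swarmReference)
  })
  contract.on('WorkOrderRejected', (id, reason) => {
    if (id === orderId) console.log('work order rejected:', reason)
  })
}
```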
If successful, a claim must be executed to close the work order; otherwise a retry or refund is triggered.
To finish the work order, the user claims it and a signature is attached to the work order, certifying acknowledgement by the user.
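The claim itself could be as simple as the ethers v6 sketch below: the user signs an acknowledgement of the order and submits it. The message layout and the claim method on the work order contract are assumptions.

```ts
import { ethers } from 'ethers'

// Hypothetical claim step: the user signs an acknowledgement of the finished work order
// and submits it to close the order (the same path could carry a retry/refund flag).
async function claimWorkOrder(signer: ethers.Signer, workOrderContract: ethers.Contract, orderId: string) {
  // The signature certifies that the user acknowledges the delivered result.
  const digest = ethers.solidityPackedKeccak256(['string', 'bytes32'], ['swarm-db-claim', orderId])
  const signature = await signer.signMessage(ethers.getBytes(digest))

  // `claim` is a placeholder method name on the work order contract.
  const tx = await workOrderContract.getFunction('claim')(orderId, signature)
  await tx.wait()
}
```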
Glossary
CAR
: Content Addressable aRchive
CID
: Content IDentifier
DEX
: Decentralized Exchange
AMM
: Automated Market Maker
IPFS
: Interplanetary File System
Swarm
: Ethereum Swarm Decentralized Content Storage Network
DApp
: Decentralized Application
IPLD
: Interplanetary Linked Data
Filecoin
: Proof of Storage and Space decentralized content storage chain
WASM
: Web Assembly
Swarm DB
: Decentralized database using Swarm Bee as store and a set of smart contracts and L2 Nodes which offer a database-like experience
Copyright
Copyright and related rights waived via CC0.
Author
@molekilla (Rogelio Morrell)