Silo is a storage primitive designed to support live migration. One of its core capabilities is migrating/syncing storage to various backends while that storage is still in use, with minimal impact on performance.
All storage sources within Silo implement storage.StorageProvider. You can find some example sources at pkg/storage/sources.
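To give a feel for the shape of a source, here is a minimal in-memory sketch. The method set shown is an assumption for illustration only; refer to storage.StorageProvider in the repository for the authoritative interface:

```go
package main

import (
	"fmt"
	"sync"
)

// StorageProvider is an illustrative sketch of the interface shape;
// the authoritative definition is storage.StorageProvider in the Silo repo.
type StorageProvider interface {
	ReadAt(buffer []byte, offset int64) (int, error)
	WriteAt(buffer []byte, offset int64) (int, error)
	Size() uint64
}

// MemoryStorage is a trivial in-memory source, similar in spirit to the
// examples under pkg/storage/sources.
type MemoryStorage struct {
	mu   sync.RWMutex
	data []byte
}

func NewMemoryStorage(size int) *MemoryStorage {
	return &MemoryStorage{data: make([]byte, size)}
}

func (m *MemoryStorage) ReadAt(buffer []byte, offset int64) (int, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return copy(buffer, m.data[offset:]), nil
}

func (m *MemoryStorage) WriteAt(buffer []byte, offset int64) (int, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	return copy(m.data[offset:], buffer), nil
}

func (m *MemoryStorage) Size() uint64 { return uint64(len(m.data)) }

func main() {
	var s StorageProvider = NewMemoryStorage(1024)
	if _, err := s.WriteAt([]byte("hello"), 0); err != nil {
		panic(err)
	}
	buf := make([]byte, 5)
	if _, err := s.ReadAt(buf, 0); err != nil {
		panic(err)
	}
	fmt.Println(string(buf), s.Size()) // hello 1024
}
```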
If you wish to expose a Silo storage device to an external consumer, one option is to use the NBD kernel driver. See pkg/expose/sources.
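Once a device is exposed over NBD, an external consumer can treat it like any ordinary block device. A minimal sketch, assuming a Silo device has already been exposed at /dev/nbd0 (the path is hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumption: a Silo storage device has already been exposed by the
	// NBD kernel driver at /dev/nbd0.
	f, err := os.Open("/dev/nbd0")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// The consumer simply reads it like an ordinary block device.
	buf := make([]byte, 4096)
	n, err := f.ReadAt(buf, 0)
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes from the exposed device\n", n)
}
```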
When you wish to move storage from one place to another, you'll need to specify a block order. This order can change dynamically: for example, a volatility monitor can be used to migrate blocks from least volatile to most volatile, and you may also wish to prioritize certain blocks, such as those the destination is currently trying to read. See pkg/storage/blocks. A sketch of such an ordering policy is shown below.
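To make the idea concrete, here is a minimal sketch of "least volatile first, with priority overrides" as a standalone ordering function. The types and names here are illustrative only and are not the API in pkg/storage/blocks:

```go
package main

import (
	"fmt"
	"sort"
)

// blockOrder sketches "least volatile first, but prioritized blocks jump
// the queue"; the real implementations live in pkg/storage/blocks and the
// volatility monitor.
type blockOrder struct {
	writes   map[int]int  // block -> recent write count (volatility)
	priority map[int]bool // blocks the destination needs right now
}

func (o *blockOrder) next(pending []int) []int {
	ordered := append([]int(nil), pending...)
	sort.SliceStable(ordered, func(i, j int) bool {
		pi, pj := o.priority[ordered[i]], o.priority[ordered[j]]
		if pi != pj {
			return pi // prioritized blocks come first
		}
		// Otherwise, least volatile first.
		return o.writes[ordered[i]] < o.writes[ordered[j]]
	})
	return ordered
}

func main() {
	o := &blockOrder{
		writes:   map[int]int{0: 9, 1: 1, 2: 5, 3: 0},
		priority: map[int]bool{2: true}, // destination is reading block 2
	}
	fmt.Println(o.next([]int{0, 1, 2, 3})) // [2 3 1 0]
}
```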
Migration of storage is handled by a Migrator. For more information, see pkg/storage/migrator.
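As a rough mental model, a migrator walks the blocks in the chosen order and copies each one from source to destination. The toy serial version below is illustrative only (the Storage interface and migrate function here are hypothetical); the real Migrator also tracks dirty blocks and supports concurrency:

```go
package main

import "fmt"

// Storage is a minimal read/write shape for this sketch; the real Migrator
// in pkg/storage/migrator works against storage.StorageProvider.
type Storage interface {
	ReadAt(b []byte, off int64) (int, error)
	WriteAt(b []byte, off int64) (int, error)
}

// migrate copies the given blocks from src to dst in order. This is a toy
// serial version; blocks written at src after being copied would need to be
// re-synced as dirty blocks.
func migrate(src, dst Storage, order []int, blockSize int64) error {
	buf := make([]byte, blockSize)
	for _, block := range order {
		off := int64(block) * blockSize
		if _, err := src.ReadAt(buf, off); err != nil {
			return err
		}
		if _, err := dst.WriteAt(buf, off); err != nil {
			return err
		}
	}
	return nil
}

type mem []byte

func (m mem) ReadAt(b []byte, off int64) (int, error)  { return copy(b, m[off:]), nil }
func (m mem) WriteAt(b []byte, off int64) (int, error) { return copy(m[off:], b), nil }

func main() {
	src, dst := make(mem, 4096), make(mem, 4096)
	copy(src, []byte("silo"))
	if err := migrate(src, dst, []int{0, 1, 2, 3}, 1024); err != nil {
		panic(err)
	}
	fmt.Println(string(dst[:4])) // silo
}
```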
Example of a basic migration. Here block number is on the Y axis and time on the X axis. We start out by performing random writes to regions of the storage. Migration begins at 500ms, and you can see that less volatile blocks are moved first (in blue). Once that is complete, dirty blocks are synced up (in red).
This example adds a device reading from the destination. The block order is still least volatile first, but with priority given to the blocks needed for reading. You can also see on the graph that the average read latency drops as more of the storage becomes locally available at the destination.
Same as above, but with concurrency set to 32. As long as the destination can handle concurrent writes, everything flows. A sketch of this concurrency pattern is shown below.
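Here is what that concurrency means in practice, sketched with a standard bounded-semaphore pattern that keeps up to 32 block transfers in flight at once. The per-block copy stands in for a real source-to-destination transfer:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const (
		numBlocks   = 256
		blockSize   = 64
		concurrency = 32 // blocks in flight at once
	)
	src := make([]byte, numBlocks*blockSize)
	dst := make([]byte, len(src))

	sem := make(chan struct{}, concurrency) // bounded semaphore
	var wg sync.WaitGroup

	for block := 0; block < numBlocks; block++ {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot
		go func(block int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			// Stand-in for a real block transfer; each goroutine touches
			// a disjoint region of dst, so this is race-free.
			off := block * blockSize
			copy(dst[off:off+blockSize], src[off:off+blockSize])
		}(block)
	}
	wg.Wait()
	fmt.Println("migrated", numBlocks, "blocks with", concurrency, "in flight")
}
```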
Bug reports and pull requests are welcome on GitHub at https://github.com/loopholelabs/silo. For more information on contributing, check out the contribution guide.
The Silo project is available as open source under the terms of the AGPL 3.0 License.
Everyone interacting in the Silo project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the CNCF Code of Conduct.