Project Gutenberg #29
The first thing I mirrored to IPFS was a small subset of Project Gutenberg, so I'm definitely interested in getting the whole thing into IPFS, as both @rht (#14) and @simonv3 (https://github.com/simonv3/ipfs-gutenberg) have suggested.
Making an issue to coordinate this.
Comments
This is just an …
@rht is there enough free disk space on Pollux?
(didn't check)
https://www.gutenberg.org/wiki/Gutenberg:Mirroring_How-To says the collection is at least 650 GB (and may have doubled since). But anyway, the mirroring is a one-liner.
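For reference, a minimal sketch of what that one-liner could look like, assuming one of the rsync modules listed on the Mirroring How-To page (aleph.gutenberg.org::gutenberg is used here as an example, and /data/gutenberg is an arbitrary local path with ~1 TB free):

```bash
# Mirror the full Project Gutenberg collection into a local directory.
# aleph.gutenberg.org::gutenberg is one example module; check the How-To for the current mirror list.
rsync -av --del aleph.gutenberg.org::gutenberg /data/gutenberg/
```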
@rht Yeah, what makes this difficult is the amount of disk space - I don't think many people have that much space lying around for this. Some people have suggested sharding the collection and having the people hosting each shard keep their piece in sync independently. There's also been talk about this tool: ipfs/notes#58
We could also each pitch in some amount for an Amazon instance (or some other host) with that much storage, and just pay for that? Or I could see if I can figure out my Raspberry Pi and attach a TB drive to it.
Hmm, rsync doesn't have seek, so at least the first 'download -> hash' step needs the full TB of storage to hold the data. Either …
For now, to do a partial backup, …
(and both storage came from Amazon)
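A rough sketch of what such a partial backup could look like, assuming you rsync only one top-level subdirectory of the collection and then add it to a local IPFS node (the mirror module, the subdirectory "1", and the local path are all illustrative):

```bash
# Mirror a subset of the collection, then hash it into the local IPFS repo.
# The chosen subdirectory is only an example; ipfs add works with or without a running daemon.
rsync -av --del aleph.gutenberg.org::gutenberg/1/ /data/gutenberg-partial/1/
ipfs add -r -Q /data/gutenberg-partial/1/
```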
@jbenet @lgierth SEND MORE DISKS... Also see ipfs/infra#89
@rht Yeah, what I really want to do is have a "click to pin" button on the archive homepage: people select how much storage they want to donate, and the tool randomly selects an appropriate subset of the least-redundant blocks and pins them to the local daemon. CC: @whyrusleeping. Edit: see ipfs/notes#54
That would be cool. We could have our service enumerate providers for each block under a given archive root, then assign the blocks with the fewest providers to the next person who requests some.
The selection should also be normalized based on the blocks' demand curve.
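Not the actual service, just a minimal sketch of that selection idea using the plain go-ipfs CLI; the archive root hash, the provider sample size, and the pin budget below are placeholders:

```bash
#!/usr/bin/env bash
# For each block under an archive root, count how many providers the DHT
# currently reports, then pin the N least-provided blocks locally.
ROOT=QmExampleArchiveRootHash   # hypothetical archive root
N=100                           # pin budget: number of under-replicated blocks to take

ipfs refs -r -u "$ROOT" | while read -r block; do
  providers=$(ipfs dht findprovs -n 20 "$block" | wc -l)
  echo "$providers $block"
done | sort -n | head -n "$N" | awk '{print $2}' | while read -r block; do
  ipfs pin add "$block"
done
```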
We can get more storage nodes, if necessary |