Badger2 integration #9230
Conversation
Thank you for submitting this PR!
Getting other community members to do a review would be a great help too on complex PRs (you can ask in the chats/forums). If you are unsure about something, just leave us a comment.
We currently aim to provide initial feedback/triaging within two business days. Please keep an eye on any labelling actions, as these will indicate the priorities and status of your contribution.
This is working for me so far and could use more testers. I haven't experimented with Badger 3 yet, but this should work for larger IPFS storage that needs reliable garbage collection, as flatfs is at the mercy of your OS filesystem's ability to handle small files and your kernel's ability to handle the flood of syscalls.
@feld CI is not passing; try running
@feld Would LOVE this! Please please please clean up the branch and save the world by finally having a performant DS in IPFS!
Hi, having this would be super awesome! |
It would be great to see this merged! |
How about Badger2's GC performance?
It's working for me, but it definitely needs more testers.
This is a provisional PR and should not be merged until dependencies are merged into their master branches.
- Add badger2 plugin
- Update docs to describe badger2
- Add tests for badger2
- Add options to configure compression and blockCacheSize
- Document vlogFileSize config param
- Update go-ipfs-config
- Update go-ds-badger2
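For context, here is a rough sketch of what these knobs could look like when opening the datastore directly through go-ds-badger2. It assumes go-ds-badger2 keeps the same Options wrapper as go-ds-badger (an embedded dgraph-io/badger/v2 Options plus GC settings); the path and the values are illustrative only, and the Kubo config keys added by this PR are not shown.

```go
package main

import (
	"log"

	"github.com/dgraph-io/badger/v2/options"
	badger2ds "github.com/ipfs/go-ds-badger2"
)

func main() {
	// Start from the datastore's defaults and override the three knobs this
	// PR exposes: compression, block cache size, and value-log file size.
	opts := badger2ds.DefaultOptions
	opts.Options = opts.Options.
		WithCompression(options.ZSTD). // compression
		WithBlockCacheSize(512 << 20). // blockCacheSize (~512 MiB, illustrative)
		WithValueLogFileSize(1 << 30)  // vlogFileSize (1 GiB, illustrative)

	d, err := badger2ds.NewDatastore("/path/to/repo/badger2ds", &opts)
	if err != nil {
		log.Fatal(err)
	}
	defer d.Close()
}
```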
…tabases > 1TB
Usenet Express uses badger with >300TB of data in 2018: http://bos.itdks.com/a121f6647d6042989fb9e76fa40a03f4.pdf
We were opening the database and telling it to keep infinite copies by not defining a value.
Force-pushed from e90e282 to 5d5fb41.
Soon(tm) we'll be bumping our Kubo build at ipfs-search.com. Once that's gathered a bit of data, I could give this a spin and see what it does in production. ETA ~2 weeks.
I actually think IPFS should be using MDBX instead of something like Badger as the default storage engine. I'm quite surprised nobody has gone down that rabbit hole yet.

https://github.com/torquem-ch/mdbx-go
https://twitter.com/deplinenoise/status/840453414546898944
https://gitflic.ru/project/erthink/libmdbx#improvements-beyond-lmdb

And then if you need to scale REALLY big, it should be some object storage backend like Ceph. Maybe someone with more free time will pull on this thread for me.
Thank you for submitting this PR.
Before one of the Kubo maintainers reviews this, I need to ask two questions:
- Is Badger 2 supported upstream at all? IIUC, the last time (~2020?) we reported issues upstream, they were closed and we were asked to upgrade to Badger 3.
- Is anyone maintaining Badger in general? AFAIK the entire(?) team left Dgraph in January: https://discuss.dgraph.io/t/quitting-dgraph-labs/16702
Merging an unsupported datastore is not something we want, so the team is unlikely to invest time in reviewing this until we have confidence it won't be dead on arrival.
Hope you understand our hesitance.
> I actually think IPFS should be using MDBX instead of something like Badger as the default storage engine
Looking at alternatives to Badger that are actively maintained sounds sensible. Mind opening a dedicated issue for discussing this? (Ideally with working code and benchmarks that show which workloads would benefit from it over flatfs, but a stub issue with the links you provided is OK too.)
Let's keep this PR about Badger 2.
I don't like the fact that Kubo makes it hard for you to use whatever datastore you want, and that Kubo maintainers have to act as gatekeepers for this, which is why I wrote https://github.com/guseggert/go-ds-grpc. I have a PR here to add it to Kubo, but it has failed to gain traction (and in retrospect I'd bias towards HTTP instead of gRPC). If a mechanism like this were added to Kubo, allowing you to download or build an out-of-process datastore, such as Badger 2, and wire it up to a vanilla Kubo binary with config, would that suffice for your use cases? The wiring is slightly more complicated, but it would let folks distribute datastores for Kubo without having to fork it, make a plugin, or convince Kubo maintainers to support it. (I'm not saying we won't accept or support this, just offering an alternative that would solve this problem more generally.)
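To make the idea concrete, here is a minimal sketch of an out-of-process datastore: a sidecar process exposing Get/Put over plain HTTP, which a thin client-side datastore could forward to. This is not go-ds-grpc's actual wire format or API; the endpoint path and the map-backed store are made up for illustration.

```go
package main

import (
	"context"
	"io"
	"log"
	"net/http"

	ds "github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
)

func main() {
	// Any go-datastore implementation (badger2, pebble, ...) could sit here;
	// a map-backed store keeps the sketch self-contained.
	store := dssync.MutexWrap(ds.NewMapDatastore())

	http.HandleFunc("/block/", func(w http.ResponseWriter, r *http.Request) {
		key := ds.NewKey(r.URL.Path[len("/block"):])
		switch r.Method {
		case http.MethodGet:
			val, err := store.Get(context.Background(), key)
			if err != nil {
				http.Error(w, err.Error(), http.StatusNotFound)
				return
			}
			w.Write(val)
		case http.MethodPut:
			val, err := io.ReadAll(r.Body)
			if err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			if err := store.Put(context.Background(), key, val); err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
			}
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})

	// A matching client-side datastore would translate Get/Put/Has/Delete
	// into requests against this endpoint.
	log.Fatal(http.ListenAndServe("127.0.0.1:5080", nil))
}
```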
@hsanjuan started work on badger3: https://github.com/ipfs/go-ds-badger3/ (and http://github.com/ipfs/go-ds-pebble), which sounds better than the [no longer maintained] 2. Filecoin also did work on datastores; what would be awesome is a test suite that accepts a datastore interface and benchmarks all of the features (we might actually have code for this somewhere, probably on the Filecoin side). But given that neither of them (badger 2 or 3) is maintained, I don't think we will be adding them; this is easy to ship as a plugin.
Note badger3 is maintained; they had a release some weeks ago: https://github.com/dgraph-io/badger/releases

Unfortunately I don't think the "come back with benchmarks" approach is useful. Badger is optimized for SSDs. Both Badger and Pebble have lots of options that control many performance factors depending on things like available memory, the need for compression, etc. One big problem in Kubo is that badger options cannot be tuned, and the defaults are especially bad at larger scale (they work on a 1GB Raspberry Pi though). The underlying filesystem also matters, and so does the disk layout (some datastores may work better with layouts in which writes are sharded across disks, etc.). In the case of flatfs, it is generally very slow, especially with ext4, but faster with XFS. Some datastores could be optimized for spinning disks rather than SSDs.

What benchmarks show is that badger cannot be used at scale (I mean, you don't need a benchmark for that). Node restarts take multiple minutes at certain scales. Also, compaction doesn't work well at all in Badger v1, which means a lot of disk-usage overhead. But perhaps badger3 with the right tuning works much, much better, and perhaps Pebble even more. Another issue is that the Blockstore introduces caching and bloom-filter layers, but Badger3/Pebble already include those, so they would probably make things even worse.

Ultimately, things depend a lot on usage patterns (reads vs. writes, total number of blocks, block sizes, etc.). Badger v3 includes everything under 1MB in the LSM tree (memory) by default. That means someone using 1MB ipfs blocks is going to get completely different behaviour than someone using 256KB blocks. Even worse, badger v1 used 32 bytes for this threshold, which means even small things like CIDs go to disk.

What I mean is that the only way of making Kubo storage not suck right now (it is really in a very bad place) is to add support for additional stores (badger3, pebble, potentially bolt), along with control of all configuration values, so that people can actually test different configurations. Whether that's via plugins or core support is up to you, but it is very low-hanging fruit to just "pass".

This is a nice experiment, but the network/serialization overhead here would kill performance. Some use cases may not mind, but definitely not heavy use like other backends could provide. I would add it nevertheless, as it allows remote backends etc.

On another note, there is also not-so-low-hanging fruit: blockstore implementations like https://github.com/ipld/go-storethehash, sqlite-backed storage à la https://github.com/whyrusleeping/whypfs with refcounting, etc. These do require time and more refactoring.
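To illustrate the kind of tuning being described, here is a hedged sketch using dgraph-io/badger v3 directly (not go-ds-badger3, whose option surface may differ). The values are assumptions chosen to show the knobs, not recommended defaults.

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v3"
	"github.com/dgraph-io/badger/v3/options"
)

func main() {
	opts := badger.DefaultOptions("/path/to/badger3").
		// Values below this threshold live in the LSM tree, larger ones in the
		// value log; v3 defaults to ~1MB while v1 used 32 bytes, which is why
		// 256KB and 1MB blocks behave so differently.
		WithValueThreshold(256 << 10).
		// Badger ships its own block/index caches, which overlap with the
		// caching and bloom-filter layers Kubo's blockstore adds on top.
		WithBlockCacheSize(256 << 20).
		WithIndexCacheSize(64 << 20).
		WithCompression(options.ZSTD)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```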
@hsanjuan thanks for your answer.
I jumped to conclusions too quickly, but looking at the releases they seem quite light:
Nov 4:
Oct 15:
Looking at the code frequency, we can see a clear shutdown in early 2021: https://github.com/dgraph-io/badger/graphs/code-frequency. Whether the project is maintained isn't the key point (we use badger1 after all), but AFAICT this isn't a project we can add now and iron out later, so we should be more careful when adding it.
My understanding is that for now the badger2 and badger3 datastores are experiments / toes in the water; I hope that we (Kubo maintainers) don't need to get pulled into this kind of work.
This sounds awesome, but I think this shouldn't involve us (see point above).

On another note, we don't have any good visibility into how bad the issue is. I mean, I know I'm using Kubo every day, but flatfs and badger are fine enough for my hundreds-of-GiBs workflow on my machine with too much RAM. I know there is work in different places, mainly around supporting bigger sizes, but I haven't been able to find an issue summarizing the problem; if you know of one or some other docs, it would be nice, so I could raise this at our triage meetings and we could place it on our board of priorities.
@Jorropo do you mean Kubo maintainers should not be involved in doing benchmarking, or something else?
How? What's on your mind? Note that benchmarking was done early in Lotus development, and that's how Badger2 was chosen for Lotus. Badger2 handles huge datastores there. Even if badger is no longer heavily maintained, that doesn't mean it is bad for many use cases (better GC is what many people complain about, and that should be way better in Badger3). Pebble is a more open question, as it is heavily maintained, so it has probably improved since it was tried out. Unfortunately the benchmarks done back then (2 years ago) may have little meaning now, given how codebases and hardware improve, etc. @ajnavarro has some recent benchmarks that can give an idea of how things are today, but in the end I think it all depends on use cases.
Supersedes #7705