Planning & Progress #1

This repository has been archived by the owner on Aug 2, 2021. It is now read-only.

Open · 7 of 11 tasks
magik6k opened this issue Nov 21, 2018 · 4 comments
magik6k (Member) commented Nov 21, 2018

This is a meta-issue documenting what is happening in this repo and what the future plans are (it's also a good place to discuss them!).

schomatis commented:

@magik6k Regarding the OKR "Run datastore benchmark/test suite wrapping Badger and other storage options" (which I'm responsible for), could you help me plan what would be needed to accomplish it?

magik6k (Member, Author) commented Dec 10, 2018

Yeah, I think those two OKRs (mine being "Common datastore benchmark suite in the TB range that tests IPFS requirements for a datastore") are more or less equal in their goals and overlap a fair bit.

I think the idea was that I'm responsible for creating and possibly running large-scale benchmarks, and your OKR was mostly about a common test/bench suite (e.g. in go-datastore) that datastore implementations could call with something like dstest.Run(t, newDsFunc). cc @momack2, does that make sense?
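
A minimal sketch of what such a shared entry point might look like. Only dstest.Run and newDsFunc come from the comment above; the stripped-down Datastore interface and the test cases are illustrative assumptions, not the actual go-datastore API:

```go
// Hypothetical sketch of a shared datastore test suite entry point in
// the spirit of dstest.Run(t, newDsFunc). The Datastore interface here
// is a minimal stand-in, not the real go-datastore interface.
package dstest

import (
	"bytes"
	"testing"
)

// Datastore is the minimal surface this sketch exercises.
type Datastore interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
	Delete(key string) error
}

// Run exercises basic datastore semantics against fresh instances
// produced by newDs. Each implementation repo would call this from
// its own tests.
func Run(t *testing.T, newDs func() Datastore) {
	t.Run("PutGet", func(t *testing.T) {
		d := newDs()
		key, val := "/test/key", []byte("value")

		if err := d.Put(key, val); err != nil {
			t.Fatalf("Put failed: %v", err)
		}
		got, err := d.Get(key)
		if err != nil {
			t.Fatalf("Get failed: %v", err)
		}
		if !bytes.Equal(got, val) {
			t.Fatalf("Get returned %q, want %q", got, val)
		}
	})

	t.Run("DeleteThenGet", func(t *testing.T) {
		d := newDs()
		key := "/test/key"

		if err := d.Put(key, []byte("value")); err != nil {
			t.Fatalf("Put failed: %v", err)
		}
		if err := d.Delete(key); err != nil {
			t.Fatalf("Delete failed: %v", err)
		}
		if _, err := d.Get(key); err == nil {
			t.Fatal("Get after Delete should return a not-found error")
		}
	})
}
```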

I don't think it will be possible to merge the benchmarks from here with ones from other places, as the benchmarks here carry a lot of case-specific support code that makes sure the OS/hardware performs at least somewhat predictably.
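
For a purely hypothetical example of the kind of support code meant here (not code from this repo), assuming Linux and root privileges: dropping the page cache between runs so repeated benchmarks start from a comparable I/O state.

```go
// Hypothetical example of OS-level benchmark support code. Linux-only,
// and writing to /proc/sys/vm/drop_caches requires root.
package benchsupport

import (
	"os"
	"syscall"
)

// DropPageCache flushes dirty pages to disk and then asks the kernel
// to drop the page cache, dentries, and inodes (the equivalent of
// `sync; echo 3 > /proc/sys/vm/drop_caches`).
func DropPageCache() error {
	syscall.Sync() // flush dirty pages first so dropping is effective
	return os.WriteFile("/proc/sys/vm/drop_caches", []byte("3"), 0o200)
}
```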

schomatis commented:

Oh, thanks for the clarification. In that case I made the mistake of delaying my work while waiting on these benchmarks (thinking that one builds on top of the other). I'll see what I can find in go-datastore and I'll ping you for help if you have time.

momack2 commented Dec 13, 2018

IIRC:

  • KR.1 was around having a generic benchmark suite that enumerated and tested everything we expected IPFS to need from a datastore, so that we could run it against many possible solutions and evaluate how well they met our needs. This requires defining the areas we care about (and to what extent) as well as actually writing the benchmarks (see the sketch after this list).
  • KR.4 was to fuzz/test our potential datastore options through continuous integration (to gain visibility into bugginess, etc.).
  • KR.2 & KR.3 were about specifically evaluating Badger against our other options and making a go/no-go decision in the short term, so we can move ahead on improving the best option ASAP (i.e. quantifying how buggy Badger is vs. our other options, how fast it is compared to the alternatives, etc.). If we can do that via a common test/bench suite, that's great; otherwise I'd imagine it would have some overlap with KR.1 in designing/creating new benchmarks or tests that we'd want to run in a more continuous fashion (via KR.4). However, I'd imagine a lot of that work (defining what to test) is already encoded in @magik6k's work and can be reused.
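
As a rough illustration of the kind of benchmark KR.1 calls for (the sketch referenced in the first bullet), here is a hypothetical put-throughput helper; the Datastore interface and all names are assumptions, since the actual suite is what these OKRs set out to build:

```go
// Hypothetical sketch of one benchmark in a generic datastore suite
// of the kind KR.1 describes. All names here are illustrative.
package dsbench

import (
	"fmt"
	"testing"
)

// Datastore is a minimal stand-in for the write surface we benchmark.
type Datastore interface {
	Put(key string, value []byte) error
}

// BenchPut measures put throughput for fixed-size values, one of the
// access patterns IPFS needs from a datastore. Implementations would
// wrap it in a standard func BenchmarkXxx(b *testing.B) so that
// `go test -bench` picks it up.
func BenchPut(b *testing.B, newDs func() Datastore, valueSize int) {
	d := newDs()
	value := make([]byte, valueSize)

	b.SetBytes(int64(valueSize))
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		key := fmt.Sprintf("/bench/%d", i)
		if err := d.Put(key, value); err != nil {
			b.Fatal(err)
		}
	}
}
```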
