
Guide on writing benchmarking tasks #7

Open
lorentey opened this issue Apr 16, 2021 · 3 comments
Labels
documentation Improvements or additions to documentation

Comments

@lorentey
Member

We need a chapter 2 of the documentation that's all about writing benchmarks.

The addSimple method demonstrated in chapter 1 has no support for omitting parts of the code from the benchmarking scope, so only a fraction of benchmarks can use it.

The real add method supports setup code that isn't measured (both code that needs to run only once per input, and code that needs to be repeated before each measurement), but it is far too subtle to use without adequate documentation.
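For orientation, the difference looks roughly like this (a sketch based on the existing Set<Int> contains example; the guide should spell out the exact semantics and signatures):

```swift
import CollectionsBenchmark

var benchmark = Benchmark(title: "Demo")

// addSimple: the entire closure is measured, setup included.
benchmark.addSimple(
  title: "Set<Int> init from array",
  input: [Int].self
) { input in
  blackHole(Set(input))
}

// add: the outer closure runs unmeasured, once per generated input;
// the closure it returns is the part that actually gets timed.
benchmark.add(
  title: "Set<Int> contains",
  input: ([Int], [Int]).self
) { input, lookups in
  let set = Set(input)            // setup, not measured
  return { timer in
    for value in lookups {        // measured
      precondition(set.contains(value))
    }
  }
}

benchmark.main()
```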

@lorentey lorentey added the documentation Improvements or additions to documentation label Apr 16, 2021
@heckj
Contributor

heckj commented May 11, 2021

I was just looking at this (and the code) to try to sort it out. I gather that you'll want to register the input types for the function with registerInputGenerator, but I'm not entirely clear on add(), its overlap with Task, or the optional return of Benchmark.TaskBody.

Do you have a simple example using add that I could build from? I'd be happy to draft a bit of content.

@DivineDominion

I'm officially over being ashamed that I couldn't figure out how to use this, and am now chiming in to express a similar need:

My trouble going in was that I couldn't understand how input: ([Int], [Int]).self operates. I couldn't find any generators, and jumping between symbols didn't help. The gap in my understanding that documentation could bridge is a short, high-level explanation of the components involved in customizing benchmarks. The Task type performs some work, but the actual work being done didn't reveal itself to me, so where should one look?

(After playing with this for an hour or so, I'm not confident that I can benchmark insertion into ordered and unordered collections with the tools given to me. Like @heckj, I'm stuck on add; I'm experimenting with the Set<Int> contains example to adjust things, but it's all very superficial. The kind of thing I'm trying to write is sketched below.)
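Roughly what I'm attempting (a sketch only; I'm not at all sure the measured region is what I think it is):

```swift
import CollectionsBenchmark

var benchmark = Benchmark(title: "Insertions")

// Unordered: insert `size` shuffled integers into a Set.
// With addSimple, the whole closure is timed.
benchmark.addSimple(
  title: "Set<Int> insert",
  input: [Int].self
) { input in
  var set: Set<Int> = []
  for value in input {
    set.insert(value)
  }
  precondition(set.count == input.count)
  blackHole(set)
}

// Ordered: keep an Array sorted while inserting (binary search for the slot).
benchmark.addSimple(
  title: "Array<Int> sorted insertions",
  input: [Int].self
) { input in
  var array: [Int] = []
  for value in input {
    var low = 0
    var high = array.count
    while low < high {
      let mid = (low + high) / 2
      if array[mid] < value { low = mid + 1 } else { high = mid }
    }
    array.insert(value, at: low)
  }
  blackHole(array)
}

benchmark.main()
```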

@heckj
Contributor

heckj commented May 3, 2024

The code that provides those (built-in) generators is https://github.com/apple/swift-collections-benchmark/blob/main/Sources/CollectionsBenchmark/Benchmark/Benchmark.swift#L24-L35

They're set up to create one or two sets of integers, mostly shuffled, so that ordered insertion algorithms get properly exercised. They're passed through Task (same file, quite a bit lower down) into whatever your benchmark code needs, which gives you options to test just insertions, deletions, moves, re-orderings, etc.
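Paraphrasing those registrations from memory (see the linked lines for the exact code), they look roughly like:

```swift
// Rough paraphrase of the built-in input generator registrations.
// The generator closure is handed the benchmark size and returns the input.
benchmark.registerInputGenerator(for: [Int].self) { size in
  (0 ..< size).shuffled()
}
benchmark.registerInputGenerator(for: ([Int], [Int]).self) { size in
  ((0 ..< size).shuffled(), (0 ..< size).shuffled())
}
```

A custom input type presumably works the same way: register a generator for it, then name it as the input: of a task.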

I hope Karoy will correct me if I'm wrong, but the kit is all about two-dimensional benchmarks: the measurements are always "performance over size" (number of insertions, number of deletions, etc.). So if you're after a one-dimensional benchmark (how long does function X take, without regard to the size of a backing collection), this isn't the right library to use. In those cases, I'd highly recommend https://github.com/ordo-one/package-benchmark, which likewise would be terrible for the use cases this library excels at.

I got quite a bit more out of reading the actual benchmark definitions in swift-collections (e.g. https://github.com/apple/swift-collections/blob/main/Benchmarks/Sources/Benchmarks/DequeBenchmarks.swift) - hopefully that helps ...
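For instance, the pattern there that finally made add click for me is per-measurement setup inside the returned closure, with timer.measure marking the part that's actually timed (a sketch, not a verbatim copy of any benchmark in that file):

```swift
import CollectionsBenchmark
import DequeModule

var benchmark = Benchmark(title: "Deque demo")

// Per-measurement setup lives inside the returned closure;
// only the region wrapped in `timer.measure` is timed.
benchmark.add(
  title: "Deque<Int> removeFirst",
  input: Int.self
) { size in
  return { timer in
    var deque = Deque(0 ..< size)   // rebuilt for each measurement, not timed
    timer.measure {
      for _ in 0 ..< size {         // only this block is timed
        deque.removeFirst()
      }
    }
    precondition(deque.isEmpty)
    blackHole(deque)
  }
}

benchmark.main()
```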
