Reliable FCS benchmarks which run on every release #13606
Nice to see this getting traction 👍 FYI I created https://github.com/safesparrow/fsharp-benchmark-generator for automating FCS testing. I'd be more than happy to help either extend that or work on an equivalent, more official tool. I think in general a separate repository with benchmarks is better than putting them inside the main repository - similar to the approach taken in dotnet/performance. Also I think it would be nice if whatever benchmarking tools are created are available for local runs and not just CI.
I created some parsing and type checking benchmarks on FSharpPlus and FsToolkit.ErrorHandling which take a reasonable amount of time but should also hopefully catch most regressions. They can be run locally, with a local build of FSC plugged in via the project file. I'll think about making that easier somehow.
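For reference, plugging a locally built compiler into a benchmark project is typically done by swapping the published NuGet package for a direct assembly reference in the project file. A minimal sketch (the artifact path and the commented-out package version are placeholders, not taken from the comment above):

```xml
<!-- Sketch: point the benchmark project at a locally built FSharp.Compiler.Service.
     The HintPath below is a placeholder; adjust it to your local artifacts directory. -->
<ItemGroup>
  <!-- <PackageReference Include="FSharp.Compiler.Service" Version="41.0.*" /> -->
  <Reference Include="FSharp.Compiler.Service">
    <HintPath>..\fsharp\artifacts\bin\FSharp.Compiler.Service\Release\netstandard2.0\FSharp.Compiler.Service.dll</HintPath>
  </Reference>
</ItemGroup>
```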
@0101, given that your PR in dotnet/performance has been merged for 2 weeks now, can we see some results, graphs? Does it run regularly yet?
@kerams it should be running now, and data being collected. Still have to figure out how to get to them. |
Ping |
Latest results are from November last year; it seems it doesn't run anything, and no alerts were produced. We're mostly using local machines to run before/after comparisons. There's no data for other (non-F#) types of runs either. Not sure if it was migrated somewhere @DrewScoggins?
We need to come up with reliable FCS benchmarks and run them as part of dotnet/performance, ideally on every release (on every signed build for the current release branch). Some ideas for scenarios we want to test:
Some metrics we are interested in: time spent in scenarios, allocations (how much do we promote, is anything ending up in the LOH, etc.), time spent in GC, and so on.
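As a rough illustration of how those metrics could be collected, here is a hedged BenchmarkDotNet sketch over FCS parsing; `MemoryDiagnoser` reports allocated bytes and GC collection counts alongside timings. The file name, source snippet, and parsing options are assumptions for illustration, not an agreed benchmark design:

```fsharp
// Sketch only: a BenchmarkDotNet benchmark over FCS parsing.
// [<MemoryDiagnoser>] adds allocation and Gen0/1/2 GC columns to the results.
open BenchmarkDotNet.Attributes
open BenchmarkDotNet.Running
open FSharp.Compiler.CodeAnalysis
open FSharp.Compiler.Text

[<MemoryDiagnoser>]
type ParseBenchmark() =
    let checker = FSharpChecker.Create()
    // Placeholder input; a real benchmark would parse a large, representative file.
    let source = SourceText.ofString "module M\nlet add x y = x + y"
    let options =
        { FSharpParsingOptions.Default with SourceFiles = [| "Test.fsx" |] }

    [<Benchmark>]
    member _.ParseFile() =
        checker.ParseFile("Test.fsx", source, options)
        |> Async.RunSynchronously

[<EntryPoint>]
let main _argv =
    BenchmarkRunner.Run<ParseBenchmark>() |> ignore
    0
```

A type-checking benchmark would follow the same shape, calling `ParseAndCheckFileInProject` instead, at the cost of a longer run.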
More info and context: #12526 & #13419