Automation of performance measurements #212
I would like to focus on the
For context: UAST perf measurements on the gitbase side (src-d/gitbase#606) hit #209. It would be nice to try to generate at least a similar load in our baseline and see how much it can be stretched from there.
The new SDK (v2.12.0+) will generate a benchmark report. I will now update all the drivers that include benchmark fixtures (many thanks to @tsolakoua!). It won't be enabled in CI for obvious reasons (shared instances), so we still need some infrastructure to run it.
Next Monday is OSD and I could continue on that, since I'm finished with the benchmark fixtures. However, I don't understand the next steps well, so I might need some support to get started.
\cc @smola, as AFAIK he was working on some Jenkins setup.
Watch https://github.com/src-d/backlog/issues/1307. It will be guided by
We already have the Jenkins deployment; soon you'll have the borges pipeline as an example to develop your own.
Linking some instructions on using Jenkins for perf testing: https://src-d.slack.com/archives/C0J8VQU0K/p1544633659068100
@lwsanty will continue to work on this, as discussed on Slack. Specifically, we have a set of Go benchmarks in each driver which can be run with the standard Go tooling. I think a good first step would be to set up our Jenkins instance to run these Go benchmarks for each driver, either every few days or on each commit to the driver's repository.
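For illustration only (a sketch, not the SDK's actual generated code): a per-driver Go benchmark in this style would look roughly like the following, where the fixture path and `parseNative` are hypothetical placeholders.

```go
package driver_test

import (
	"io/ioutil"
	"testing"
)

// parseNative is a hypothetical stand-in for the driver's real parse
// entry point; it only exists to make the sketch self-contained.
func parseNative(src []byte) (int, error) { return len(src), nil }

// BenchmarkParseFixture measures repeated parsing of one benchmark fixture.
func BenchmarkParseFixture(b *testing.B) {
	src, err := ioutil.ReadFile("fixtures/bench_large.py") // hypothetical fixture
	if err != nil {
		b.Fatal(err)
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		if _, err := parseNative(src); err != nil {
			b.Fatal(err)
		}
	}
}
```

Jenkins could then invoke something like `go test -run=NONE -bench=. -benchmem ./...` in each driver repository and archive the output for later comparison.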
Following the previous comment, I propose to achieve this the same way it was done in borges: regression-borges.
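Not the regression-borges code, just a sketch of what the regression-check step could look like, under the assumption that we compare plain `go test -bench` outputs from two runs; the file names and the 10% threshold are made up.

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readBench extracts "BenchmarkName ... <value> ns/op" lines from a
// `go test -bench` output file into a name -> ns/op map.
func readBench(path string) (map[string]float64, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	out := map[string]float64{}
	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) < 4 || !strings.HasPrefix(fields[0], "Benchmark") || fields[3] != "ns/op" {
			continue
		}
		if ns, err := strconv.ParseFloat(fields[2], 64); err == nil {
			out[fields[0]] = ns
		}
	}
	return out, s.Err()
}

func main() {
	base, err := readBench("bench-master.txt") // hypothetical baseline run
	if err != nil {
		log.Fatal(err)
	}
	cur, err := readBench("bench-branch.txt") // hypothetical current run
	if err != nil {
		log.Fatal(err)
	}
	const slowdown = 1.10 // flag anything >10% slower (arbitrary threshold)
	for name, ns := range cur {
		if old, ok := base[name]; ok && ns > old*slowdown {
			fmt.Printf("possible regression: %s %.0f -> %.0f ns/op\n", name, old, ns)
		}
	}
}
```

In practice a tool like benchstat would do a statistically sounder comparison; the sketch only shows the shape of the pipeline step.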
@smola @dennwc @bzz
Overall looks good!
JFYI: repository creation, as well as other ACL bits, is handled by Infra, where the appropriate issues have to be filed as soon as there is a consensus. Before doing that, shall we briefly discuss what kind of performance regression dashboard we want to have at the end? E.g. from the proposal on repository naming above, I figure that we are talking about an individual driver's "internal" performance benchmarks. I think it would be really useful to include the following things in the same dashboard:
Maybe this would require turning the current issue into an ☂️ and handling each of those individually through new, smaller issues, in order of priority. I believe that this way, all of these may live in the same repository, e.g.
Last but not least: for me, notifications are much less of a priority compared to having such a "dashboard".
Given the requirements above, I'm not sure how much of the
Also, AFAIK
And for 2-4, I'm not 100% sure, but I think we might be able to reuse some of the prior work, e.g.:
@dennwc @creachadair WDYT? BTW, maybe it would be productive to schedule a quick call about this at some point.
👍 for scheduling a call.
Agreed about the notifications: they are not that important. The MVP for me is a dashboard with
For the dashboard itself, I'm not sure what is considered "standard" right now, but I definitely don't want Jenkins dashboards: those are static and ugly. I also propose using a pair of Grafana + Influx/ES/whatever, if there are no better options. Grafana also provides "alarms", so we can set up notifications later (if needed).
Re "single dashboard from multiple tools": as @lwsanty mentioned, we may need to consult with the Infra team to know whether we can reuse our Grafana instance in the pipeline cluster. We may need a separate one because of the isolation between clusters.
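To make the Grafana + Influx idea concrete, here is a minimal sketch of pushing one benchmark data point to an InfluxDB 1.x instance via the line protocol; the URL, database, measurement and tag names are all assumptions, not an existing deployment.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// pushPoint writes one benchmark measurement to InfluxDB using the
// line protocol: measurement,tags fields timestamp.
func pushPoint(driver, bench string, nsPerOp float64) error {
	line := fmt.Sprintf("driver_bench,driver=%s,bench=%s ns_per_op=%f %d",
		driver, bench, nsPerOp, time.Now().UnixNano())
	resp, err := http.Post(
		"http://influxdb:8086/write?db=benchmarks", // hypothetical instance and DB
		"text/plain",
		strings.NewReader(line),
	)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("influx write failed: %s", resp.Status)
	}
	return nil
}

func main() {
	// Hypothetical values; in practice these would come from parsed
	// `go test -bench` output (see the earlier sketch).
	if err := pushPoint("python", "BenchmarkParseFixture", 123456); err != nil {
		fmt.Println("push failed:", err)
	}
}
```

A Grafana dashboard would then simply graph `ns_per_op` grouped by driver, and Grafana's alerting could cover the notification part later.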
This is an umbrella issue for initial work on automating a performance analysis/regression suite for bblfshd, to build a baseline benchmark.
Motivation (things reported to be slow):
TODOs:
- dataset per driver (1 same program from RosettaCode?): Dataset for automation of performance measurements #220
- bblfshd / individual driver (STDIO: native parser)

Each of the items above is expected to be taken care of in a separate issue/PR (by different authors).
As this is an initial round of work on performance, there are no expectations about the completeness of the test cases; it's more important to have all the pieces in place and the infrastructure up and running.
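As a sketch of what the baseline harness from the TODO list might look like (the dataset path and `parseWithDriver` are hypothetical placeholders, not an existing API), one option is to time one sample program per driver and print per-file numbers:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"testing"
)

// parseWithDriver is a hypothetical placeholder for whatever entry point
// the baseline ends up measuring (bblfshd, or a single driver over stdio).
func parseWithDriver(src []byte) error {
	_ = src
	return nil
}

func main() {
	// Hypothetical dataset directory with one sample program per language,
	// e.g. taken from RosettaCode (see #220).
	files, err := filepath.Glob("dataset/*")
	if err != nil {
		panic(err)
	}
	for _, f := range files {
		src, err := ioutil.ReadFile(f)
		if err != nil {
			panic(err)
		}
		// testing.Benchmark lets us reuse the standard benchmark machinery
		// outside of `go test`.
		res := testing.Benchmark(func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				if err := parseWithDriver(src); err != nil {
					b.Fatal(err)
				}
			}
		})
		fmt.Printf("%s: %d ns/op\n", f, res.NsPerOp())
	}
}
```

The per-file results could then be fed into the same storage/dashboard as the per-driver Go benchmarks above.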