Benchmark Runner results shown in viewer #390
Conversation
The goal is for this to be reused by the new benchmark runner. A benchmark result's fields can specify how they are aggregated, in which case their total/count/percent-true/histogram is displayed in the header; otherwise they are viewable only by clicking through to the individual benchmark. TODO: actually use it in the benchmark runner, and have indentation/syntax errors be aggregated.
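A minimal sketch of the idea described above, assuming hypothetical field names (`cost`, `indentation_error`) and a standalone `aggregate` helper that are not from the actual PR: each result field carries aggregation metadata, and the viewer uses it to build the header summary.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and metadata keys are assumptions,
# not the PR's real schema.
@dataclass
class BenchmarkResult:
    cost: float = field(default=0.0, metadata={"aggregation": "total"})
    indentation_error: bool = field(default=False, metadata={"aggregation": "percent_true"})

def aggregate(results, field_name, how):
    """Aggregate one field across all results according to its metadata."""
    values = [getattr(r, field_name) for r in results]
    if how == "total":
        return sum(values)
    if how == "percent_true":
        return 100 * sum(1 for v in values if v) / len(values)
    return None  # fields without aggregation metadata are skipped in the header

results = [BenchmarkResult(cost=1.0, indentation_error=True),
           BenchmarkResult(cost=2.0, indentation_error=False)]
print(aggregate(results, "cost", "total"))                    # 3.0
print(aggregate(results, "indentation_error", "percent_true"))  # 50.0
```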
I like it, should help us understand the shoggoth a bit better.
</style>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/styles/default.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.9.0/highlight.min.js"></script>
<script>hljs.initHighlightingOnLoad();</script>
It looks like this is making booleans bold/green and strings and numbers red. I like having colors but I mistook this for indicating failure/success. Could it be different colors?
Oh yeah, that does seem bad in the JSON code blocks, where true mostly means an error was detected. I'll check. If it's hard, I'll just turn off syntax highlighting for JSON, since it doesn't add much anyway.
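For reference, highlight.js documents two ways to do this, either of which would work with the `<script>` setup shown above: add the `nohighlight` class to a specific block, or restrict the languages considered during auto-detection via `hljs.configure`. (Note that `initHighlightingOnLoad` is deprecated in the 11.x line in favor of `highlightAll`.)

```html
<!-- Option 1: skip highlighting for one block -->
<pre><code class="nohighlight">{"indentation_error": true}</code></pre>

<!-- Option 2: keep JSON out of the auto-detection candidates -->
<script>
  hljs.configure({ languages: ["python", "plaintext"] });
  hljs.highlightAll();
</script>
```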
if len(values) == 0:
    summary[name] = (0, 0)
else:
    percent_set = len(values) / len(self.results)
I wonder if n/N would be clearer than a percent? I had to guess what the two different percents meant my first time.
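A sketch of the suggestion above: render the summary as an "n/N" count rather than a bare percent. The function name is illustrative, not from the PR.

```python
def format_set_count(values, results):
    """Show how many results set this field as 'n/N' instead of a percent."""
    return f"{len(values)}/{len(results)}"

print(format_set_count([True, True, False], [None] * 5))  # 3/5
```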
tests/benchmarks/benchmark_runner.py
Outdated
diff = repo.git.diff(start_commit)
# Set syntax and response grade information
diff = repo.git.diff()
afaik this only saves edits, not new/untracked files. I ran into the same thing in #374 and added a helper func in git_handler. (I think this is why the code was missing in my earlier comment)
Now I see you already caught it. I think stage/diff/unstage is the right approach too.
After this PR I'll edit the cron job to run the new benchmarks as well as all the JavaScript, Python, and Clojure exercisms. When I do that, I'll change the benchmark summary to be saved as JSON, get a summary message from it, and send it along with the link to Slack. I'll also sync that JSON to its own bucket so it will be easy to parse old data and make graphs in the future.
LGTM
The BenchmarkResult class and its viewer are made more general with the following changes:
Now, if you introduce a new benchmark with a new property, you can add the property to the BenchmarkResults class along with the appropriate metadata. This won't break previous benchmarks, and if appropriate, it is easy to set the property for them.
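The backward-compatibility point above can be sketched as follows, assuming a dataclass-style results class with hypothetical field names: old saved results simply lack the new field and fall back to its default when loaded.

```python
from dataclasses import dataclass, field

# Field names are illustrative assumptions, not the PR's real schema.
@dataclass
class Results:
    syntax_error: bool = field(default=False, metadata={"display": "percent"})
    # Added later; old saved runs don't have it, so a default is required.
    tokens_used: int = field(default=0, metadata={"display": "total"})

old_run = {"syntax_error": True}   # saved before tokens_used existed
result = Results(**old_run)        # still loads; tokens_used defaults to 0
print(result.tokens_used)  # 0
```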
misc:
todo: