This is essentially a triage of every open issue and pull request, along with suggestions (personal).

Issues

Close?

Web Related
gh-23 on minification of JS seems a bit unnecessary, but it's a good idea for releases at any rate
gh-31 on syntax highlighting doesn't seem too useful; we don't have a whole lot of code snippets in the static site.
gh-39 is interesting; it would be great to clean up the HTML and move away from jQuery. That said, the site is pretty minimal as is, so updating Bootstrap might be a better idea.
gh-439 on having flexible thresholds for regression reports would involve moving the calculations, or at least some of the regression logic, to the JS side instead of the Python side. The JS side is something I'm not familiar with. However, it might be useful, since rebuilding the HTML (the only current possibility) can be slow for large projects.
gh-519 is a symptom of more general selection issues (can be seen here too): selections are not preserved when switching between views. Not sure how much effort this would take to implement.
API Breaks
gh-191, requesting commit annotations, is a good idea, but is probably an API break
gh-225 is also good, but is an API break and will break every downstream project
Environment related
gh-20 might be useful, but in general moving away from environment management would be nicer
gh-338, on caching and reusing builds across branches, seems like it wouldn't be worthwhile: it would mean a new feature and a new environment type for a rather niche use case. Additionally, given the proliferation of build tools, staying further away from building things is better for asv
Misc
gh-57 is a good idea, but might be partially covered by pr-1253
gh-377 seems like a bad idea, in that running concurrent loads isn't very useful for benchmarking in the first place. It could probably be closed out fairly easily by randomizing the environment names, but it probably shouldn't be done.
gh-523 is conceptually related to pr-1248, and deals with replacing virtualenv with venv. Might be useful, but the virtualenv dependency isn't particularly onerous either.
gh-510 might not be worthwhile, as it would extend the surface area of asv considerably, even if we delegate to, say, fireworks
gh-446 on sequential benchmarks, might be nice to document / implement formally
CLI enhancements
gh-551, on skipping steps when a step range is applied, is a straightforward enhancement of the existing commands and should be implemented. The current behaviour is unintuitive, to say the least.
gh-370 (in particular this comment), on comparing between Python versions, can be seen as a direct improvement to the environment syntax; pr-352 is related, and a partial implementation of something similar is in pr-789
gh-550 is almost a bug: when there are no new commits, the exit status should indeed not be non-zero
gh-273 on passing daily to --steps would be interesting, but might not be worthwhile: a lot of the code (right from the parser) assumes steps is a positive integer. It seems like something which can be easily handled in a pre-asv step (choosing a git commit for each day), so some documentation would suffice.
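A pre-asv step like this can be sketched in Python. The input format (`git log --format='%cs %H'`, newest first) and the idea of feeding the selected hashes to `asv run HASHFILE:daily.txt` are assumptions for illustration:

```python
# Sketch: emulate "--steps daily" outside asv by keeping one commit per day.
# Input lines are assumed to come from: git log --format='%cs %H'
# (date then hash, newest first); the picked hashes could then be written to
# a file and run via `asv run HASHFILE:daily.txt`.

def pick_daily_commits(log_lines):
    """Keep the newest commit for each calendar day."""
    seen_days = set()
    picked = []
    for line in log_lines:
        day, sha = line.split()
        if day not in seen_days:  # first (newest) commit seen for this day wins
            seen_days.add(day)
            picked.append(sha)
    return picked

lines = [
    "2023-06-02 aaa111",
    "2023-06-02 bbb222",
    "2023-06-01 ccc333",
]
print(pick_daily_commits(lines))  # ['aaa111', 'ccc333']
```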
gh-518 on regex selection for tags would be useful, but perhaps it is more of a documentation update: tags can be filtered, and their commits can then be used to emulate the requested feature.
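The emulation could look something like this; `filter_tags` is a hypothetical helper operating on `git tag` output, and the matching tag names would then be passed to asv as commit ranges:

```python
# Sketch: emulate regex tag selection as a pre-processing step over the
# output of `git tag`; the matching tags can then be handed to `asv run`.
import re

def filter_tags(tags, pattern):
    """Return tags whose names match the given regular expression."""
    rx = re.compile(pattern)
    return [t for t in tags if rx.search(t)]

tags = ["v0.9", "v1.0", "v1.1rc1", "v1.1"]
print(filter_tags(tags, r"^v1\.\d+$"))  # ['v1.0', 'v1.1']
```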
gh-488 on adding ALL_TAGS would be useful, though emulation should be simple since pr-1227
gh-483 on adding more commands to list and manipulate benchmarks might be worthwhile
gh-450 on having plain-text views of regressions would also be useful, perhaps now in Markdown, since tabulate is part of the package
Parameterization, Benchmark discovery, Setup
gh-179, gh-543, gh-482, on being able to run multiple types of benchmarks, tie into the parameterization requests; running multiple benchmarks is also something the pytest-style parameterization should fix. The current formulation isn't flexible enough: what is required is a decorator which will generate instances of each kind of benchmark
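One possible shape for such a decorator, as a hypothetical sketch; the `parameterize` name and semantics are illustrative, not an existing asv API:

```python
# Hypothetical sketch of pytest-style parameterization for benchmarks: a
# class decorator that stamps out one time_* method per parameter combination.
import itertools

def parameterize(**params):
    """Class decorator: clone each time_* method once per parameter combo."""
    names = list(params)
    combos = list(itertools.product(*params.values()))

    def wrap(cls):
        for attr in [a for a in vars(cls) if a.startswith("time_")]:
            func = getattr(cls, attr)
            for combo in combos:
                suffix = "_".join(str(v) for v in combo)
                bound = dict(zip(names, combo))
                def made(self, _f=func, _kw=bound):
                    return _f(self, **_kw)
                setattr(cls, f"{attr}_{suffix}", made)
            delattr(cls, attr)  # replaced by the generated variants
        return cls
    return wrap

@parameterize(n=[10, 100])
class AppendSuite:
    def time_append(self, n):
        lst = []
        for i in range(n):
            lst.append(i)

suite = AppendSuite()
print(sorted(a for a in dir(suite) if a.startswith("time_")))
# ['time_append_10', 'time_append_100']
```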
gh-567 and gh-481, concerning interop with pytest-benchmark and, in general, better benchmark discovery, seem like a good direction to work on in parallel to the parameterization
gh-478 on comparative benchmarks for similar functionality seems worthwhile to at least document with class decorators / metaclasses, and might even be good to implement; it is closely related to parameterization
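A class-decorator sketch of what comparative benchmarks could look like; `compare` is a hypothetical helper, not an existing asv API:

```python
# Sketch: comparative benchmarks for interchangeable implementations via a
# class decorator that emits one time_* method per candidate function.
def compare(*impls):
    def wrap(cls):
        for impl in impls:
            def made(self, _impl=impl):
                return _impl(self.data)
            setattr(cls, f"time_{impl.__name__}", made)
        return cls
    return wrap

def sum_builtin(data):
    return sum(data)

def sum_loop(data):
    total = 0
    for x in data:
        total += x
    return total

@compare(sum_builtin, sum_loop)
class SumComparison:
    def setup(self):
        self.data = list(range(1000))

s = SumComparison()
s.setup()
print(sorted(a for a in dir(s) if a.startswith("time_")))
# ['time_sum_builtin', 'time_sum_loop']
```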
gh-563 on recognizing old benchmarks after changes to asv.conf.json is very relevant, and should be fixed internally; the existing workaround should be documented
Bugs
gh-333 is probably still present (no Windows machine to test), but it seems like that particular combination is hopefully obsolete by now
gh-375 and gh-918 are bugs related to the progressbar implementation in asv. It might be easier to delegate that entire section of the code to tqdm, the progressbar library.
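A sketch of what delegation to tqdm could look like, with a placeholder loop body standing in for actually running a benchmark:

```python
# Sketch: let tqdm handle progress display instead of a hand-rolled
# progressbar; the loop body is a placeholder for benchmark execution.
from tqdm import tqdm

results = []
for bench in tqdm(["bench_a", "bench_b", "bench_c"],
                  desc="benchmarks", unit="bench"):
    results.append(bench.upper())  # placeholder for actual benchmark work
print(results)  # ['BENCH_A', 'BENCH_B', 'BENCH_C']
```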