Added run_flags directive to compiletest
#13553
Conversation
// option. This file may not be copied, modified, or distributed
// except according to those terms.

/// Completely miscellaneous language-construct benchmarks.
I don't think that the tests in this directory have `--test` passed when compiled or `--bench` passed when run. Can you verify that these tests are run when you run this test? You can use something like `make check-stage1-bench TESTNAME=basic` to run this test.
It doesn't work... I even get an error:
running 1 test
test [run-pass] bench/basic_benches.rs ... FAILED
using metrics ratchet: tmp/check-stage1-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-bench-metrics.json
result of ratchet: 0 metrics added, 0 removed, 0 improved, 0 regressed, 0 noise
updated ratchet file
failures:
---- [run-pass] bench/basic_benches.rs stdout ----
error: compilation failed!
command: x86_64-unknown-linux-gnu/stage1/bin/rustc /home/aochagavia/Desktop/rust/src/test/bench/basic_benches.rs -L x86_64-unknown-linux-gnu/test/bench --target=x86_64-unknown-linux-gnu -L x86_64-unknown-linux-gnu/test/bench/basic_benches.stage1-x86_64-unknown-linux-gnu.libaux -C prefer-dynamic -o x86_64-unknown-linux-gnu/test/bench/basic_benches.stage1-x86_64-unknown-linux-gnu --cfg rtopt --cfg debug -O -L x86_64-unknown-linux-gnu/rt
stdout:
------------------------------------------
------------------------------------------
stderr:
------------------------------------------
error: main function not found
error: aborting due to previous error
------------------------------------------
task '[run-pass] bench/basic_benches.rs' failed at 'explicit failure', /home/aochagavia/Desktop/rust/src/compiletest/runtest.rs:963
failures:
[run-pass] bench/basic_benches.rs
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured
Would this be fixed if I included a main function?
I have included a main function that does nothing (`fn main() { }`) and now it says that the test "passed". However, it looks like it is not benchmarking anything. I have looked at the other benchmarks in the directory and none of them uses the Bencher. If you take a look at (for example) std-core.rs, you will be able to see what I mean.
Do you have any idea how to fix this?
Current output after adding an empty main function:
running 1 test
test [run-pass] bench/basic_benches.rs ... ok
using metrics ratchet: tmp/check-stage1-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-bench-metrics.json
result of ratchet: 0 metrics added, 0 removed, 0 improved, 0 regressed, 0 noise
updated ratchet file
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured
Currently you can specify flags to be passed to the compiler (you would need to pass `--test`), but there is no way to specify flags for the invocation of the binary (you would need to pass `--bench`).
To move these tests, I think a new `run-flags` directive will need to be added to src/compiletest in order to pass `--bench` to the binary itself.
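For illustration, a test-file header combining the existing directive with the one proposed here might look like the following (a sketch only; the exact spelling of the new directive is what this PR is deciding):

```rust
// compile-flags: --test   <- existing directive: flags for the compiler
// run-flags: --bench      <- proposed directive: flags for the compiled binary
```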
I don't understand what you mean by `run-flags`. I couldn't find any reference to it in the manual or in the code itself. However, I have taken a look at src/compiletest and, if I understand correctly, the benchmarks are run as if they were tests (maybe I am wrong; the code has almost no comments, so I was on my own).
Any ideas on how to go further? I am stuck...
Currently there are `compile-flags` directives at the top of tests which instruct compiletest to pass certain flags to the compiler. You'll need to add a `run-flags` directive which dictates flags to pass to the binary which gets run.
I don't understand what you mean by directive. I expected to find something like `#[compile-flags(something)]`, but I couldn't.
I have made some changes to make it look like the other benches in the directory. Then I tried using `make check-stage1-bench`. This time the benchmark was run.
I have amended the commit and pushed again, so you can see the changes. Please tell me if I am moving in the right direction.
By directive I mean the top of the test file, which looks like:
// compile-flags: --test
// aux-build:foo.rs
(there are many other directives). You've made it such that this code is run, but it is no longer run in the benchmark harness, so we're no longer getting any numbers relevant to these benchmarks. It would be a little sad to just remove benchmarks.
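As a rough illustration of how compiletest-style header directives could be extracted, here is a minimal sketch (this is not the actual compiletest code; `parse_directive` is a hypothetical helper):

```rust
// Hypothetical sketch of directive parsing: directives are comment lines
// like `// run-flags: --bench` at the top of a test file.
fn parse_directive(line: &str, name: &str) -> Option<String> {
    // Build the expected prefix, e.g. "// run-flags:".
    let prefix = format!("// {}:", name);
    // If the line starts with the prefix, return the trimmed remainder.
    line.strip_prefix(prefix.as_str())
        .map(|rest| rest.trim().to_string())
}

fn main() {
    let header = "// run-flags: --bench";
    // The header yields the flags to pass to the compiled binary.
    assert_eq!(
        parse_directive(header, "run-flags"),
        Some("--bench".to_string())
    );
    // A non-directive line yields nothing.
    assert_eq!(parse_directive("fn main() {}", "run-flags"), None);
    println!("parsed: {:?}", parse_directive(header, "run-flags"));
}
```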
You are right... It makes no sense to have a "benchmark" which does not give numbers. Now I understand what you mean; I can try to add the `run-flags` directive to compiletest.
I think that `run-flags` would also be useful for other benchmarks in the directory, which at this moment do not give any numbers. Some of them are meant to be run as normal programs (such as the `shootout-*` ones), so this does not apply to them, but there are others (such as `core-std`) which do Bencher-like testing. What do you think about it?
Certainly! Things like `core-{std,map,set}` sometimes print their own timings, but using the existing `#[bench]` infrastructure would also be nice.
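As a rough sketch of what the `#[bench]`/Bencher harness does under the hood, here is a minimal stand-in timing loop in modern stable Rust (`bench` is a hypothetical helper, not the real `test::Bencher` API; the real harness also calibrates the iteration count):

```rust
use std::hint::black_box;
use std::time::Instant;

// Minimal stand-in for `Bencher::iter`: run a closure `iters` times
// and return the average wall-clock time per iteration in nanoseconds.
fn bench<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_nanos() as f64 / iters as f64
}

fn main() {
    // Sample workload: build a small vector and sum it.
    // `black_box` keeps the optimizer from deleting the work.
    let ns_per_iter = bench(1_000, || {
        let v: Vec<u64> = (0..100).collect();
        black_box(v.iter().sum::<u64>());
    });
    println!("~{:.0} ns/iter", ns_per_iter);
}
```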
@alexcrichton I have tested it and the basic_benches are run. This is the output that I get when running …
I also just remembered (sorry for not thinking of this earlier!) that all benchmarks in libstd actually have their metrics recorded and saved across all buildbot runs. We don't currently do much useful with that data, but by moving these benchmarks we're not collecting it at all, which is a bit unfortunate.
I have removed the "trailing whitespace" that caused problems with ….
Regarding the recording of the metrics, I think you are right. Is there any way to record metrics for benchmarks that aren't contained in libstd? If not, then we will have to put these benchmarks back there (hopefully in a better place than …).
So, what should I do now?
There isn't really a great place for these language-generic benchmarks at this time. We do not have much benchmarking analysis infrastructure or prior art here. In the past, benchmarks have just been added to test things here and there rather than in a rigorously structured fashion. I don't think that these benchmarks belong in …
Should I copy the benchmarks back to …?
I think that's one way it could work, yeah.
Now it is possible to specify run-flags in tests. For example, by using `run-flags: --bench` the Bencher is run.
Good, I have opened another pull request (#13643) for the changes in compiletest.