
Added run_flags directive to compiletest #13553

Merged
merged 1 commit into rust-lang:master from aochagavia:pr on Apr 21, 2014

Conversation

aochagavia (Contributor)

Now it is possible to specify run-flags in tests. For example, by using `run-flags: --bench`, the Bencher is run.
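
For reference, the directive sits alongside the existing compile-flags directive at the top of a test file. A minimal header (illustrative only) looks like:

// compile-flags: --test
// run-flags: --bench

With these two lines, compiletest compiles the file with --test (building the test harness) and then runs the resulting binary with --bench, so #[bench] functions actually get measured.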

(Review comment on src/test/bench/basic_benches.rs:)

// option. This file may not be copied, modified, or distributed
// except according to those terms.

/// Completely miscellaneous language-construct benchmarks.

alexcrichton (Member)

I don't think that the tests in this directory have --test passed when compiled or --bench passed when run. Can you verify that these tests are actually run when you run this test?

You can use something like make check-stage1-bench TESTNAME=basic to run this test.

aochagavia (Contributor Author)

It doesn't work... I even get an error:

running 1 test
test [run-pass] bench/basic_benches.rs ... FAILED

using metrics ratchet: tmp/check-stage1-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-bench-metrics.json
result of ratchet: 0 metrics added, 0 removed, 0 improved, 0 regressed, 0 noise
updated ratchet file

failures:

---- [run-pass] bench/basic_benches.rs stdout ----

    error: compilation failed!
    command: x86_64-unknown-linux-gnu/stage1/bin/rustc /home/aochagavia/Desktop/rust/src/test/bench/basic_benches.rs -L x86_64-unknown-linux-gnu/test/bench --target=x86_64-unknown-linux-gnu -L x86_64-unknown-linux-gnu/test/bench/basic_benches.stage1-x86_64-unknown-linux-gnu.libaux -C prefer-dynamic -o x86_64-unknown-linux-gnu/test/bench/basic_benches.stage1-x86_64-unknown-linux-gnu --cfg rtopt --cfg debug -O -L x86_64-unknown-linux-gnu/rt
    stdout:
    ------------------------------------------

    ------------------------------------------
    stderr:
    ------------------------------------------
    error: main function not found
    error: aborting due to previous error

    ------------------------------------------

    task '[run-pass] bench/basic_benches.rs' failed at 'explicit failure', /home/aochagavia/Desktop/rust/src/compiletest/runtest.rs:963


failures:
    [run-pass] bench/basic_benches.rs

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured

Would this be fixed if I included a main function?

aochagavia (Contributor Author)

I have included a main function that does nothing (fn main() { }) and now it says that the test "passed". However, it looks like it is not benchmarking anything. I have looked at the other benchmarks in the directory and none of them uses the Bencher. If you take a look at (for example) core-std.rs, you will see what I mean.

Do you have any idea how to fix this?

Current output after adding an empty main function:

running 1 test
test [run-pass] bench/basic_benches.rs ... ok

using metrics ratchet: tmp/check-stage1-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-bench-metrics.json
result of ratchet: 0 metrics added, 0 removed, 0 improved, 0 regressed, 0 noise
updated ratchet file

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured

alexcrichton (Member)

Currently you can specify flags to be passed to the compiler (you would need to pass --test), but there is no way to specify flags to the invocation of the binary (you would need to pass --bench).

To move these tests, I think a new run-flags directive will need to be added to src/compiletest in order to pass --bench to the binary itself.
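
For illustration, the difference is between a flag at compile time and a flag at run time (a hypothetical manual invocation, not how compiletest actually drives it):

rustc foo.rs --test -o foo   # compile-time flag: builds the test harness
./foo --bench                # run-time flag: runs the #[bench] functions

compile-flags covers the first line; nothing in compiletest covers the second yet.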

aochagavia (Contributor Author)

I don't understand what you mean by run-flags. I couldn't find any reference to it in the manual or in the code itself. However, I have taken a look at src/compiletest and, if I understand correctly, the benchmarks are run as if they were tests (maybe I am wrong; the code has almost no comments, so I was on my own).

Any ideas on how to go further? I am stuck...

alexcrichton (Member)

Currently there are compile-flags directives at the top of tests which instruct compiletest to pass certain flags to the compiler. You'll need to add a run-flags directive which dictates flags to pass to the binary which gets run.
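
As a rough sketch, such a name-value directive could be parsed like this (the helper name and shape are assumptions modeled on the existing compile-flags handling, not the actual compiletest code; written in present-day Rust):

// Hypothetical sketch: extract the value of a `// name: value` test directive.
fn parse_name_value_directive(line: &str, directive: &str) -> Option<String> {
    let key = format!("{}:", directive);
    line.find(&key)
        .map(|pos| line[pos + key.len()..].trim().to_string())
}

fn main() {
    let header = "// run-flags: --bench";
    // The parsed value would later be appended to the argv of the
    // compiled test binary when compiletest executes it.
    assert_eq!(
        parse_name_value_directive(header, "run-flags").as_deref(),
        Some("--bench")
    );
}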

aochagavia (Contributor Author)

I don't understand what you mean by directive. I expected to find something like #[compile-flags(something)], but there is nothing like that.

I have made some changes to make it look like the other benches in the directory. Then I tried using make check-stage1-bench. This time the benchmark was run.

I have amended the commit and pushed again, so you can see the changes. Please tell me if I am moving in the right direction.

alexcrichton (Member)

By directive I mean a comment at the top of the test file, which looks like:

// compile-flags: --test
// aux-build:foo.rs

(There are many other directives.) You've made it such that this code is run, but it is no longer run in the benchmark harness, so we're no longer getting any numbers relevant to these benchmarks; it would be a little sad to just remove the benchmarks.
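
For a concrete picture, a bench-style file using the proposed directive might look like the following sketch (present-day nightly syntax; run-flags does not exist yet at this point in the thread, and the benchmark body is invented):

// compile-flags: --test
// run-flags: --bench

#![feature(test)]
extern crate test;

use test::Bencher;

// With --test the compiler builds the test harness; with --bench the
// resulting binary runs #[bench] functions and reports timings.
#[bench]
fn bench_vec_push(b: &mut Bencher) {
    b.iter(|| {
        let mut v = Vec::new();
        for i in 0..100 {
            v.push(i);
        }
        v
    });
}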

aochagavia (Contributor Author)

You are right... It makes no sense to have a "benchmark" which does not give numbers. Now that I understand what you mean, I can try to add the run-flags directive to compiletest.

I think that run-flags would also be useful for other benchmarks in the directory, which at the moment do not give any numbers. Some of them are meant to be run as normal programs (such as the shootout-* ones), so this does not apply to them, but there are others (such as core-std) which do Bencher-like testing. What do you think?

alexcrichton (Member)

Certainly! Things like core-{std,map,set} sometimes print their own timings, but using the existing #[bench] infrastructure would also be nice.

@aochagavia (Contributor Author)

@alexcrichton I have tested it and the basic_benches are run. This is the output that I get when running make check-stage1-bench (you can see that basic_benches.rs passed):

running 38 tests
test [run-pass] bench/core-std.rs ... ok
test [run-pass] bench/core-set.rs ... ok
test [run-pass] bench/core-uint-to-str.rs ... ok
test [run-pass] bench/msgsend-pipes-shared.rs ... ok
test [run-pass] bench/msgsend-pipes.rs ... ok
test [run-pass] bench/msgsend-ring-mutex-arcs.rs ... ok
test [run-pass] bench/msgsend-ring-rw-arcs.rs ... ok
test [run-pass] bench/noise.rs ... ok
test [run-pass] bench/core-map.rs ... ok
test [run-pass] bench/basic_benches.rs ... ok
test [run-pass] bench/rt-spawn-rate.rs ... ok
test [run-pass] bench/shootout-ackermann.rs ... ok
test [run-pass] bench/rt-messaging-ping-pong.rs ... ok
test [run-pass] bench/rt-parfib.rs ... ok
test [run-pass] bench/shootout-fannkuch-redux.rs ... ok
test [run-pass] bench/shootout-fasta-redux.rs ... ok
test [run-pass] bench/shootout-fibo.rs ... ok
test [run-pass] bench/shootout-fasta.rs ... ok
test [run-pass] bench/shootout-binarytrees.rs ... ok
test [run-pass] bench/shootout-mandelbrot.rs ... ok
test [run-pass] bench/shootout-meteor.rs ... ok
test [run-pass] bench/shootout-chameneos-redux.rs ... ok
test [run-pass] bench/shootout-nbody.rs ... ok
test [run-pass] bench/shootout-pidigits.rs ... ok
test [run-pass] bench/shootout-k-nucleotide.rs ... ok
test [run-pass] bench/shootout-reverse-complement.rs ... ok
test [run-pass] bench/shootout-k-nucleotide-pipes.rs ... ok
test [run-pass] bench/silly-test-spawn.rs ... ok
test [run-pass] bench/spawnone.rs ... ok
test [run-pass] bench/shootout-pfib.rs ... ok
test [run-pass] bench/std-smallintmap.rs ... ok
test [run-pass] bench/shootout-threadring.rs ... ok
test [run-pass] bench/sudoku.rs ... ok
test [run-pass] bench/task-perf-one-million.rs ... ignored
test [run-pass] bench/task-perf-jargon-metal-smoke.rs ... ok
test [run-pass] bench/task-perf-alloc-unwind.rs ... ok
test [run-pass] bench/shootout-spectralnorm.rs ... ok
test [run-pass] bench/task-perf-spawnalot.rs ... ok

using metrics ratchet: tmp/check-stage1-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-bench-metrics.json
result of ratchet: 0 metrics added, 0 removed, 0 improved, 0 regressed, 0 noise
updated ratchet file

test result: ok. 37 passed; 0 failed; 1 ignored; 0 measured

@alexcrichton (Member)

It appears that make tidy is failing on these commits (as seen from Travis).

I also just remembered (sorry for not thinking of this earlier!) that all benchmarks in libstd actually have their metrics recorded and saved across all buildbot runs. We don't currently do much useful with that, but by moving these benchmarks out of libstd we're not collecting that data at all, which is a bit unfortunate.

@aochagavia (Contributor Author)

I have removed the trailing whitespace that caused problems with make tidy. I hope this fixes the problem.

Regarding the recording of the metrics, I think you are right. Is there any way to record metrics for benchmarks that aren't contained in libstd? If not, then we will have to put these benchmarks back there (hopefully in a better place than std::mem). Even so, we now have a run-flags directive, which I guess will be useful in the future.

So, what should I do now?

@alexcrichton (Member)

There isn't really a great place for these language-generic benchmarks at this time. We do not have much benchmarking analysis infrastructure or prior art here. In the past, benchmarks have just been added to test things here and there rather than in a rigorously structured fashion.

I don't think that these benchmarks belong in src/test/bench (due to lack of any analysis), but rather somewhere in some crate (possibly libstd, possibly not). I think the run-flags directive is fine to keep for now (it's certainly useful for other crates).

@aochagavia (Contributor Author)

Should I copy the benchmarks back to mem with a FIXME?

@alexcrichton (Member)

I think that's one way it could work, yeah.

Commit: Now it is possible to specify run-flags in tests. For example, by using `run-flags: --bench`, the Bencher is run.

@aochagavia (Contributor Author)

Good, I have opened another pull request (#13643) for the changes in mem because I think they are unrelated to run-flags.

@aochagavia aochagavia changed the title Moved old miscellaneous benchmarks from std::mem Added run_flags directive to compiletest Apr 20, 2014
@aochagavia (Contributor Author)

@alexcrichton r?

bors added a commit that referenced this pull request Apr 21, 2014
Now it is possible to specify run-flags in tests. For example, by using `run-flags: --bench` the Bencher is run.
bors added a commit that referenced this pull request Apr 21, 2014
Now it is possible to specify run-flags in tests. For example, by using `run-flags: --bench` the Bencher is run.
@bors bors closed this Apr 21, 2014
@bors bors merged commit 49ff21e into rust-lang:master Apr 21, 2014
@aochagavia aochagavia deleted the pr branch April 21, 2014 19:02
bors added a commit to rust-lang-ci/rust that referenced this pull request Jan 9, 2023
Add assist to generate trait impl's

resolves rust-lang#13553

This pull request adds a `generate_trait_impl` assist, which generates trait impl's for a type. It is almost the same as the one to generate impl's and I also reduced the trigger range to only outside the `RecordFieldList`. Also moved all the tests into separate test functions. A few of the old tests seemed redundant, so I didn't port them.
flip1995 pushed a commit to flip1995/rust that referenced this pull request Oct 18, 2024
…ge-category, r=llogiq

Change the category of `manual_is_power_of_two` to `pedantic`

Fixes rust-lang#13547.

The value being checked might be a bit flag, suggesting `is_power_of_two` for it would make the code unreadable.

changelog: [`manual_is_power_of_two`]: Change the category to `pedantic`
flip1995 pushed a commit to flip1995/rust that referenced this pull request Nov 21, 2024