
Expose CARGO_FEATURE_<foo> environment variables to tests #2517

Closed
Ryman wants to merge 1 commit

Conversation

@Ryman commented Mar 24, 2016

This can be useful if you need to enumerate the active features within an integration test which spawns builds.

@rust-highfive

Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @huonw (or someone else) soon.

If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.

Please see the contribution instructions for more information.

@Ryman (Author) commented Mar 24, 2016

r? @alexcrichton

I'm unsure whether this has been intentionally left unimplemented!

@rust-highfive assigned alexcrichton and unassigned huonw on Mar 24, 2016
@alexcrichton (Member)

Isn't this possible via routes like:

pub fn foo_activated() -> bool { cfg!(feature = "foo") }

@Ryman (Author) commented Mar 29, 2016

It is possible, but it can be error-prone: cfg!(feature = "foo") doesn't check that the feature actually exists (afaik), so it's subject to typos.
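
For instance (a minimal illustration; "experimental" and the misspelled "experimentl" are hypothetical feature names, and the second is a deliberate typo):

// The misspelled feature name below is not a compile error; cfg! simply
// evaluates to false, so the check silently reports the feature as inactive.
pub fn experimental_enabled() -> bool {
    cfg!(feature = "experimentl") // intended: "experimental"
}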

More concretely, the intended use case is to have some infrastructure for testing the examples within a project, shelling out to cargo for the builds. It works that way so that if the user runs cargo test --test foo, then foo will rebuild its dependent example, keeping development cycles quick. This might actually be really misguided, and I'd love to hear better suggestions for ensuring examples perform as expected!

We currently have something like the above in place for nickel, pulling the active features from an env var that we set in CI, but that's a real footgun when testing locally. I'd like to extract it into something more general, and if we did, it would be great not to require the user to write boilerplate to determine the active features (especially since it's easy to get wrong) and instead have it just work from within #[test] functions.

#[test]
fn foo() {
    // The Example lib knows to enumerate CARGO_FEATURE_* and pass the features on to the sub-builds.
    let process = Example::run("bar");
    assert!(process.stdout.contains("foo"));
}

// vs

fn active_features() -> Vec<Option<&'static str>> {
    vec![
        if cfg!(feature = "baz") { Some("baz") } else { None },
    ]
}

#[test]
fn foo() {
    // Would need to respecify this in every test, since we don't have any
    // kind of global init for test executables.
    let process = Example::run_with_features("bar", active_features());
    assert!(process.stdout.contains("foo"));
}

Since all of the tests are compiled with the same set of features, and all of the example builds should replicate that active set, every test would have to make the same calls, or the user would have to implement their own wrapper to DRY it up.

Perhaps an alternative would be to allow examples to have test code themselves (which cargo would know about). That would keep things simpler in many ways and add a form of additional documentation to examples (what the expected output is, how to test it). I have seen some projects that have their own Cargo.toml per example, which could imitate this kind of thing, but it would be nice not to have to split into a lot of subprojects and rely on makefiles.

Very open to better suggestions!

@alexcrichton (Member)

Oh these variables are also set at runtime for tests! Isn't that just another footgun in a sense? I think lots of tools like to be able to run binaries directly rather than through cargo test, and this means that they wouldn't work out of the box :(.

Couldn't Example be defined in the test harness and enumerate all of the features via cfg!(feature = "...") as well? I don't think we've had many requests in the past for #[test] in examples, but it seems reasonable to me! It may want to be opt-in behavior, but it seems easy enough to implement.

@Ryman (Author) commented May 9, 2016

Sorry for the delayed response, lost track of this!

I think lots of tools like to be able to run binaries directly rather than through cargo test, and this means that they wouldn't work out of the box :(.

This is something I hadn't considered at all and it probably would have been broken for a lot of cross compilation stuff. Good point!

Couldn't Example be defined in the test harness and enumerate all of the features via cfg!(feature = "...") as well?

I didn't quite follow this: what is the test harness in this case, the user's test code or the supporting test library? If it's the latter, I don't see how it could work, since generic upstream library code wouldn't know which features to check for. Perhaps something like CARGO_FEATURES (a list of all active features?) could be added at compile time, which the library could then parse?
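
For illustration, a rough sketch of how a generic upstream library could consume such a variable, assuming a hypothetical compile-time CARGO_FEATURES env var holding a comma-separated list of active features (the variable name and format are assumptions, not existing cargo behaviour):

// Hypothetical: cargo would export CARGO_FEATURES (e.g. "foo,baz") while
// compiling the test crate. option_env! bakes the value into the binary,
// so the generic helper can forward the same feature set to its sub-builds
// without the user writing any per-feature boilerplate.
fn features_from_env() -> Vec<&'static str> {
    match option_env!("CARGO_FEATURES") {
        Some(list) => list.split(',').filter(|f| !f.is_empty()).collect(),
        None => Vec::new(),
    }
}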

I've opened #2667 in relation to the example testing.

@alexcrichton (Member)

Oh, I just meant that in your example the Example struct (or rather, something somewhere in the snippet) could have enumerated all of the features and worked from that. I can imagine that's a bit of a pain, though.

Something like CARGO_FEATURES sounds good to me though!

@bors (Collaborator) commented May 26, 2016

☔ The latest upstream changes (presumably #2743) made this pull request unmergeable. Please resolve the merge conflicts.

@alexcrichton (Member)

Closing due to inactivity, but feel free to reopen with a rebase!
