
Make --trim_test_configuration the default #6842

Closed
programmablereya opened this issue Dec 4, 2018 · 15 comments
Labels
P1 I'll work on this now. (Assignee required) team-Configurability platforms, toolchains, cquery, select(), config transitions type: feature request

Comments

@programmablereya

This is pending some work on making sure that there are no legitimate reasons for test rules to depend on each other, and if needed, revising it to have more support for this case.

@programmablereya programmablereya added the team-Configurability platforms, toolchains, cquery, select(), config transitions label Dec 4, 2018
@iirina
Contributor

iirina commented Dec 5, 2018

Please add a priority based on the priorities in the Maintainers guide.

@programmablereya programmablereya added the P3 We're not considering working on this, but happy to review a PR. (No assignee) label Dec 5, 2018
@ittaiz
Copy link
Member

ittaiz commented Dec 9, 2018

If this is on by default, what will this do to a java_test target that depends on a different java_test target?
We have that scenario internally.

@programmablereya
Author

Actually, I misspoke in the initial description - the problem is when non-test rules depend on test rules. In that case, the test rule will fail to build as a dependency of the non-test rule, citing TestConfiguration being disabled.
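For illustration, here is a minimal BUILD sketch of that breaking pattern (target, package, and class names are hypothetical, not taken from this issue):

```
# Hypothetical example of a non-test rule depending on a test rule.
java_test(
    name = "some_test",
    srcs = ["SomeTest.java"],
    test_class = "com.example.SomeTest",
)

# With --trim_test_configuration enabled, the java_test reached through this
# non-test rule is analyzed in a configuration that has TestConfiguration
# trimmed away, so it fails to build as a dependency, as described above.
java_binary(
    name = "some_test_wrapper",
    main_class = "com.example.SomeTestWrapper",
    testonly = True,  # required in any case, since tests are implicitly testonly
    runtime_deps = [":some_test"],
)
```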

@ittaiz
Member

ittaiz commented Dec 14, 2018 via email

@ittaiz
Member

ittaiz commented Dec 25, 2018

@serynth I tried to enable the flag and failed exactly as you describe above, and as in the relevant test.
I now see I misunderstood your intention before.
Can I ask why a java_library with testonly = 1 fails in this scenario? I (now) realize it's not a test rule according to your definition, but it is part of the test "graph" by declaring testonly = 1.
What am I missing?

@programmablereya
Author

You're not missing anything - we don't currently use testonly as a signal, that's completely correct! :) Using it is definitely a potential improvement that could mitigate some of --trim_test_configuration's weaknesses, though!

I'd like to look into the impact before we do that, because it would mean all testonly rules stay untrimmed even when there is no test anywhere in their dependencies. As I understand it, the ability of testonly rules to depend on tests is somewhat of a side effect of their main purpose (preventing test code from being used in production), so this could significantly increase the number of targets from which the test configuration is not trimmed even though it could be, resulting in a larger memory footprint and more time spent reanalyzing.

Even if that's the case, it may be just fine, since this is only a stopgap until full-on automatic configuration trimming can be introduced. But it's something I'd like to think carefully about!

@ittaiz
Member

ittaiz commented Jan 3, 2019 via email

@programmablereya
Author

Right, so. The result of analyzing a target with a particular configuration (set of build options) is a configured target. Bazel keeps every configured target it's ever created in the analysis cache until the build options change and the analysis cache is reset. If you have a sufficiently large repository, and you build enough different targets without resetting the analysis cache or shutting down the server, you'll eventually see Bazel run out of memory - that's why.

The reason the analysis cache is reset when the build options change is that if Bazel kept the old configured targets, it would be a lot easier to run out of memory very quickly. Most users probably rebuild the same targets many times in a row, rather than building huge numbers of different targets. And when the build options change, the old configured targets are not likely to appear in the new build, whereas when building another set of targets, the new targets are likely to have common dependencies and benefit from the cache.

--trim_test_configuration turns off that resetting when only test options have changed, and thus makes it possible to run out of memory while repeatedly rebuilding the same targets with different build options. But it should generally not be a problem, because tests are a very small portion of the build graph.

For example, let's say that you have targets A, B, and C. A is a test, and B and C are testonly. Your build options result in configuration 1, which is applied to A. Bazel creates a trimmed configuration without the test options, configuration 0, and applies it to B and C. After one build, Bazel's cache contains A(1), B(0), and C(0).

Then you change (say) --test_arg, and run the build again. You do this twice, with configurations 2 and 3. A needs to be reanalyzed with the new configurations, but B and C still get configuration 0 which is cached, so only one target needs to be analyzed in each build. After this, Bazel's cache contains A(1), A(2), A(3), B(0), C(0).

However, if testonly targets receive the untrimmed configuration, then B and C do need to be reanalyzed with each build (which takes more time), and we end up with 9 configured targets in the cache instead of 5 (which takes more memory). This effect would be even more exaggerated if they were a large test framework with many testonly dependencies. And B and C in this case don't benefit from receiving the untrimmed configuration, so this duplication is unnecessary.
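A minimal BUILD sketch of the A/B/C example above (the names, and the assumption that A depends on B and B on C, are illustrative):

```
# Illustrative sketch of the A/B/C example.
java_test(
    name = "A",               # analyzed with the full configuration (1, 2, 3, ...)
    srcs = ["ATest.java"],
    test_class = "com.example.ATest",
    deps = [":B"],
)

java_library(
    name = "B",               # testonly but not a test: gets trimmed configuration 0
    testonly = True,
    srcs = ["B.java"],
    deps = [":C"],
)

java_library(
    name = "C",               # likewise analyzed once as C(0) and then cached
    testonly = True,
    srcs = ["C.java"],
)
```

With --trim_test_configuration on, rerunning the test with a different --test_arg only reanalyzes A; B and C are served from the analysis cache as B(0) and C(0). If testonly alone forced the untrimmed configuration, B and C would be reanalyzed on every such change as well.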

As far as test_filter not triggering reanalysis goes: That's definitely a possibility! :) It would require some rearchitecting, however. Currently, each test rule is responsible for determining what to do with the test_filter (much like test_arg). And it goes into the test action, which (like all other actions) is generated during analysis. test_output is different, however: it affects how the result of the test action is displayed to the user. Or, in other words, test_filter affects analysis itself, while test_output affects a wrapper around execution. To make this change, test_filter would need to affect execution itself, and Bazel would need to do some special work to know whether a test action has already been run with a particular test_filter to know whether it was cached or not. I'm not convinced that making test_filter a special case gives additional value over the current form of --trim_test_configuration (or the someday form of automatic trimming), and it would require a lot more work.

@ittaiz
Member

ittaiz commented Jan 4, 2019 via email

@gregestren gregestren added P1 I'll work on this now. (Assignee required) and removed P3 We're not considering working on this, but happy to review a PR. (No assignee) labels Feb 4, 2020
@sdtwigg
Contributor

sdtwigg commented May 14, 2020

Work on this is ongoing.

The blockers are how to handle the pre-existing cases where a non-test rule depends on a test target. This is generally not strictly necessary, just locally convenient. For example, a test target is still an executable/binary and thus returns the right providers to be executed by other rules. Thankfully, all test targets are implicitly marked testonly, so those non-test targets must also be testonly. This is generally sufficient to have users constrain their use of this pattern (a sketch of it follows at the end of this comment).

In theory, these uses can be migrated to simply be a single test target (or a test target that depends on another test target). However, in practice, this has been laborious as I comb through our internal repository. Unfortunately, this sort of migration cannot be done mechanically as it almost always requires revising how the underlying rule and macro implementations are structured.

I am currently considering a "temporary stopgap" measure that allows non-test rules to explicitly declare that the test configuration must be retained even when --trim_test_configuration is on. This would remove the depot cleanup from the critical path (and pre-empt the impact of more significant blocking use cases). However, I cannot guarantee this proposal will ultimately be accepted and executed.
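As a concrete (hypothetical) illustration of the "locally convenient" pattern mentioned above, where a non-test rule reuses a test target only because it is also an executable:

```
# Hypothetical example: a non-test rule running a test binary as a tool.
cc_test(
    name = "selfcheck_test",
    srcs = ["selfcheck_test.cc"],
)

# This only works because a test target is also an executable. Under
# --trim_test_configuration the cc_test ends up analyzed without
# TestConfiguration as a dependency of this rule, which is the kind of
# breakage being cleaned up here.
genrule(
    name = "selfcheck_report",
    testonly = True,  # mandatory, since the cc_test is implicitly testonly
    outs = ["report.txt"],
    cmd = "$(location :selfcheck_test) > $@",
    tools = [":selfcheck_test"],
)
```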

@sdtwigg
Contributor

sdtwigg commented May 14, 2020

PS: For completeness, there is also a much smaller handful of cases where users have select() statements on non-test rules that refer to options in the test configuration. The migration strategies and mitigations are the same.
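A hedged sketch of that second pattern (hypothetical names; test_arg stands in for any option read from the test configuration):

```
# Hypothetical example: a select() keyed on a TestConfiguration option,
# reached from a non-test rule.
config_setting(
    name = "my_test_arg_set",
    values = {"test_arg": "--my_flag"},  # reads from the test configuration
)

cc_library(
    name = "helper",
    srcs = ["helper.cc"],
    defines = select({
        ":my_test_arg_set": ["MY_FLAG"],
        "//conditions:default": [],
    }),
)
```

Once the test configuration is trimmed away from non-test rules, a config_setting like this can no longer be evaluated for targets such as helper, so these cases need the same migration.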

@gregestren
Contributor

FYI @sdtwigg is intending to flip this flag for Bazel 4.1.

bazel-io pushed a commit that referenced this issue Apr 9, 2021
Referenced by #6842.

There is still a (rare) chance for breakage due to:
1. Trying to build a cc_test target with and without TestConfiguration under same build (leading to duplicate action errors)
2. A select statement that refers to a config_setting reading from TestConfiguration that is nested under a non-test rule.

RELNOTES: trim_test_configuration now defaults to on
PiperOrigin-RevId: 367695474
@gregestren
Contributor

ebac27e did it!

@jschaf

jschaf commented Nov 22, 2021

On Bazel 4.2, it seems that trim_test_configuration does not default to true. Here's a summary of running a go_test target after removing build --trim_test_configuration=true from .bazelrc:

$ bazelisk version
Bazelisk version: v1.7.5
Build label: 4.2.0
Build target: bazel-out/darwin-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Wed Aug 18 12:54:03 2021 (1629291243)
Build timestamp: 1629291243
Build timestamp as int: 1629291243

# First run after removing build --trim_test_configuration=true from .bazelrc
$ bazelisk test //erp/server/svc/productsvc:productsvc_test
INFO: Build option --test_filter has changed, discarding analysis cache.

# Second run: add a test_filter to the test command
$ bazelisk test --test_filter=TestProductSvc_ItemTemplate/CloneItemTemplate_with_Item_and_price //erp/server/svc/productsvc:productsvc_test
INFO: Build option --test_filter has changed, discarding analysis cache.

# Third run: remove the test_filter and use explain.
$ bazelisk test --explain=file.txt --verbose_explanations //erp/server/svc/productsvc:productsvc_test
INFO: Build option --test_filter has changed, discarding analysis cache.

# Show the contents of the explain output.
$ cat file.txt
Build options: --experimental_ui_max_stdouterr_bytes=-1 --noincompatible_restrict_string_escapes --incompatible_strict_action_env --build_metadata='REPO_URL=https://github.com/simple-circle/simc.git' --build_metadata='TEST_GROUPS=//base,//erp/server' --build_metadata='ALLOW_ENV=PATH,HOSTNAME,ASAN_SYMBOLIZER_PATH,CC,DEBIAN_FRONTEND,GCOV,GOPATH,JAVA_HOME,LANG,LANGUAGE,LC_ALL,LD_LIBRARY_PATH,MSAN_SYMBOLIZER_PATH,TSAN_SYMBOLIZER_PATH,UBSAN_SYMBOLIZER_PATH,BUILDBUDDY_API_KEY,REPO_USER,REPO_TOKEN,HOME,BAZELISK_SKIP_WRAPPER' --tls_client_certificate=/opt/p/simc/private/buildbuddy/buildbuddy-cert.pem --tls_client_key=/opt/p/simc/private/buildbuddy/buildbuddy-key.pem --bes_results_url=https://app.buildbuddy.io/invocation/ --bes_backend=grpcs://cloud.buildbuddy.io --test_output=errors --explain=file.txt --verbose_explanations --watchfs
Executing action 'BazelWorkspaceStatusAction stable-status.txt': unconditional execution is requested.
Executing action 'Testing //erp/server/svc/productsvc:productsvc_test': action command has changed.

@brentleyjones
Contributor

ebac27e is only in Bazel 5.0+
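For anyone on a pre-5.0 release who wants the trimming behavior in the meantime, the flag can still be enabled explicitly, for example with the same .bazelrc line jschaf removed above:

```
# .bazelrc
build --trim_test_configuration=true
```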
