Cache merkle trees #13879
Conversation
src/main/java/com/google/devtools/build/lib/remote/RemoteExecutionService.java (outdated review thread, resolved)
Thanks for the PR! I like the performance boost. It touches some areas that I'm not familiar with yet, so it may take me a little while to give feedback. Can we change the switch flag to […]?
Adding some tests on […]
@@ -136,7 +137,8 @@
   @Nullable private final RemoteExecutionClient remoteExecutor;
   private final ImmutableSet<PathFragment> filesToDownload;
   @Nullable private final Path captureCorruptedOutputsDir;
-  private final ConcurrentHashMap<Object, MerkleTree> merkleTreeCache = new ConcurrentHashMap<>();
+  private final Cache<Object, MerkleTree> merkleTreeCache =
+      CacheBuilder.newBuilder().softValues().build();
We get out of memory with this patch. Is CacheBuilder.softValues() not enough? Is it possible to do something else? CacheBuilder.maximumSize() seems rather crude, as the right value depends on what is being built.
Introduced --experimental_remote_merkle_tree_cache_size for now.
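For readers less familiar with the Guava CacheBuilder API shown in the diff above, here is a minimal sketch of how a size flag could be combined with soft values. It illustrates the trade-off being discussed (GC-driven soft references versus a hard cap); the helper name and the treatment of 0 as "no cap" are assumptions, not necessarily the exact wiring in this PR.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

final class MerkleTreeCacheFactory {
  // Sketch only: combine soft-value eviction (pruned by the GC) with an optional hard cap.
  // A size of 0 is treated here as "no hard cap"; the real flag semantics may differ.
  static <K, V> Cache<K, V> create(long maxSize) {
    CacheBuilder<Object, Object> builder = CacheBuilder.newBuilder().softValues();
    if (maxSize > 0) {
      builder.maximumSize(maxSize); // bounds memory regardless of GC pressure
    }
    return builder.build();
  }
}
```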
Force-pushed from 557671e to b67cf26.
Fixed a recursion problem with […]. A test is now available, but it looks a bit clumsy. Is that kind of test the right way to go to check whether the caching is actually taking effect?
I don't have any updated measurements and am not actively working on this. One old example with 3000 inputs cut the calculation time from 78 ms to 0.7 ms (or even less). With the fast alias hash I've tried before, I got down to 0.1 ms. Bazel has more overhead, in the range of 10 ms for my example action, so 1 ms versus 0.1 ms doesn't really matter. There is a bug in the new
It's a change of behavior for the case where the same file appears in different nested sets but with different casing on Windows. An example leading to this is the winsock2 interface library in the Microsoft CLibs package, which is named "um/x64/WS2_32.Lib", a file ending which cc_import currently doesn't allow. It is, however, fine with "um/x64/WS2_32.lib", and if the same file is also a dependency from a glob in the toolchain, we end up with two objects for the same file in different nested sets with different casing. Assume we have a nested set with a ".lib" version and a nested set with a ".Lib" version of the same file.

This mismatch later confuses ActionMetadataHandler, which fails with java.lang.IllegalStateException on this line for the ".Lib" input. The immediate solution for us here is to let cc_import accept ".Lib" file endings and perhaps fail if targets specify the wrong casing instead (to avoid similar issues). More fine-grained dependencies, such that the file only occurs once, would also avoid this issue.

Given the solution above, I'm okay with this patch. If you want the previous behavior you could modify or replace the "putAll" call, but I'm unsure what the proper solution is here. There may be more scenarios leading to the above illegal state.
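As a toy illustration (plain Java, not Bazel code) of the situation described above: path strings that differ only in case are distinct map keys, so merging the two nested sets yields two entries for what is a single file on a case-insensitive filesystem.

```java
import java.util.HashMap;
import java.util.Map;

public class CaseMismatchDemo {
  public static void main(String[] args) {
    Map<String, String> inputs = new HashMap<>();
    // One spelling comes from cc_import, the other from a toolchain glob (per the comment above).
    inputs.put("um/x64/WS2_32.lib", "cc_import");
    inputs.put("um/x64/WS2_32.Lib", "toolchain glob");
    // Two entries, even though Windows treats them as the same file on disk.
    System.out.println(inputs.size()); // prints 2
  }
}
```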
@coeuvre is out of office until next week. I'll ping him again by then. Sorry for the delay!
Sorry for the delay. LGTM generally, but I would like to have some changes before merging:

- Can we move the newly added walkInputs method and its related interfaces/implementations to a place that is private to the remote module? Reasons:
  - SpawnExecutionContext is also used internally; adding methods there makes it hard to import.
  - This PR tries to improve Merkle tree performance for remote execution. Keeping the change within the remote module reduces the risk of introducing regressions for other spawn strategies.
- SpawnInputExpander#getInputMapping is also used by other spawn strategies and getting it right is crucial. Can we keep its original implementation untouched (rather than reusing walkInputs) as a starter and make sure we use the original one when the switch is off?
- Can we use two flags, one boolean flag for enabling the Merkle tree cache and another one for controlling the cache size?
Force-pushed from b67cf26 to c63d6e8.
I've rebased and resolved the merge conflicts.
Done, removed the commit "Reimplement getInputMapping() using walkInputs()".
Done. The default size is set to unlimited, but maybe 1000 or 10000 would be more usable.
SGTM. In this case, we can also move […]. Note that […]. Please also note that the result of […]
We may want to override
Thanks. Unlimited is fine to get started.
As this PR should work for remote caching also, moving things into […]. In some way […]
SGTM.
I've done the implementation, but some tests fail. Will check that in the afternoon.
Force-pushed from f69c3aa to f6a4f8a.
help =
    "The number of Merkle trees to memoize to improve the remote cache hit checking speed. "
        + "Even though the cache is automatically pruned according to Java's handling of "
        + "soft references, out-of-memory errors can occurr if set too high. If set to 0 "
Nit: typo
Fixed. Thank you!
LGTM. I will import and do the required internal changes.
return inputMap;
public SortedMap<PathFragment, ActionInput> getInputMap()
    throws IOException, ForbiddenActionInputException {
  if (lazyInputMap == null) {
It's ok to just return remotePathResolver.getInputMapping(spawnExecutionContext) since SpawnExecutionContext.getInputMapping is supposed to cache the value internally.
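A tiny sketch (hypothetical names, not the actual Bazel classes) of the memoization pattern the reviewer is describing: the expensive mapping is computed once and returned on subsequent calls, so the caller can simply delegate.

```java
import java.util.SortedMap;
import java.util.TreeMap;

final class CachingInputMappingProvider {
  private SortedMap<String, String> cachedInputMapping; // built lazily, reused afterwards

  SortedMap<String, String> getInputMapping() {
    if (cachedInputMapping == null) {
      cachedInputMapping = buildInputMapping(); // expensive; runs at most once
    }
    return cachedInputMapping;
  }

  private SortedMap<String, String> buildInputMapping() {
    return new TreeMap<>(); // stand-in for the real, slow input expansion
  }
}
```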
That's right. I've pushed a fixup commit.
@Option(
    name = "experimental_remote_merkle_tree_cache_size",
    defaultValue = "1000",
defaultValue is 1000 but the description says 0. Let's change it to 0.
Well spotted! Fixup commit available.
Thanks! You probably need to rebase to fix CI errors.
Rebased and all tests now pass. I've saved all the old versions on my own fork, in case they are of any interest.
This change allows for future lookup tables for caching with the RunfilesSupplier as key.
getInputMapping() is slow; it can take several milliseconds, so do not call it unless needed.
Force-pushed from 8e2db25 to e72c079.
Imported and passed all internal tests. Sent out for review; it should be merged soon.
It would be nice to get this into 5.0.
When --experimental_remote_merkle_tree_cache is set, Merkle tree calculations are cached for each node in the input NestedSets (depsets). This drastically improves the speed when checking for remote cache hits. One example reduced the Merkle tree calculation time from 78 ms to 3 ms for 3000 inputs. The memory footprint of the cache is controlled by --experimental_remote_merkle_tree_cache_size. The cache is discarded after each build to free up memory; the cache setup time is negligible. Fixes #10875. Closes #13879. PiperOrigin-RevId: 405793372 Co-authored-by: Fredrik Medley <[email protected]>
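To make the mechanism concrete, here is a conceptual sketch (hypothetical classes, not Bazel's NestedSet or MerkleTree) of caching a computation per DAG node, keyed by node identity, so that subtrees shared between actions are only processed once within a build.

```java
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

final class InputNode {
  final String name;
  final List<InputNode> children;

  InputNode(String name, List<InputNode> children) {
    this.name = name;
    this.children = children;
  }
}

final class PerNodeMemoizer {
  // Keyed by node identity, mirroring "cache per NestedSet node" described above.
  private final Map<InputNode, String> cache = new IdentityHashMap<>();

  String summarize(InputNode node) {
    String cached = cache.get(node);
    if (cached != null) {
      return cached; // shared subtree: reuse the previously computed result
    }
    StringBuilder sb = new StringBuilder(node.name);
    for (InputNode child : node.children) {
      sb.append('(').append(summarize(child)).append(')');
    }
    String result = sb.toString();
    cache.put(node, result);
    return result;
  }
}
```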
Thank you @coeuvre for the review and for cherry-picking it into 5.0 👍
A warning for anyone wanting to enable this flag in their build: do NOT use the default value of […]. It is essentially an in-memory cache, constructed using https://github.com/ben-manes/caffeine/, to help speed up input resolution before using the remote cache or remote execution. The flag being in the remote execution section of the Bazel 5.0 blog gave me the false impression that this caches things on the remote_cache instead of in Bazel JVM memory. Because this is in-memory and not remotely cached, you would definitely want to cap it at a certain size using […]. Also see https://github.com/ben-manes/caffeine/wiki/Eviction#reference-based for more details on how […]
I would suggest setting the default cache size to 1000-3000, just to give folks a sane value to start with, and perhaps updating the docs to say that the optimal cache size needs to be calibrated per workspace. @moroten wdyt?
It was mentioned above as well, but I just set it to unlimited. I think defaulting to anything between 1000 and 10000 seems perfectly reasonable, together with adding "the optimal cache size needs to be calibrated per workspace" to the docs. I'm no longer working on the projects where I could test the effect of different cache sizes, so I can't help with any specific number. @sluongng feel free to create a PR, unless anyone else has opinions.
I created #14959
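For anyone who wants to try this while the default is still being discussed, here is a hypothetical .bazelrc sketch. The flag names are the ones used in this thread; the size is illustrative only and, as suggested above, should be calibrated per workspace.

```
# .bazelrc (sketch; calibrate the size for your own workspace)
build --experimental_remote_merkle_tree_cache
build --experimental_remote_merkle_tree_cache_size=1000
```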
Summary: This diff updates our Bazel version to 5.0.0. I've re-worked our patches on top of the upstream 5.0.0 tag.

- `linux-sandbox` has changed a bunch (mostly in `linux-sandbox-pid1.cc`) so I had to rewrite a bunch of the patch. The main change is that upstream has added a bunch of logic from our own patch in order to support the new hermetic sandbox flag (see bazelbuild/bazel#13279). So I've cleaned things up, removed some of our code and instead called their new code. One change is that they hardlink an empty file outside of the sandbox rather than creating new files, which sounds ok. Note that we might be able to remove even more of our own patch in favor of their hermetic support but we can do that later.
- The merkle tree computation moved from `RemoteSpawnCache` to the `RemoteExecutionService`. We should be able to rewrite the patch fairly easily but they've also added an (in-process) cache for those trees (see bazelbuild/bazel#13879) so it might be helping with the slowness that we were seeing before. I'm inclined to not apply the patch to start with and we can add it back if things get much slower.

The changes are on the `dbx-2022-02-25-5.0.0` branch in the bazel repo. Here's the list of our own commits on top of upstream:

- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/5a121a34b1a2a39530bf6cecc3892fc4509a1735?visible=2 | DBX: Helper scripts to build dbx bazel ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/c3707dea392806b81f2892d46ede5bf54ef02527?visible=1 | DBX: Point remotejdk URLs to magic mirror ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/dc5a85b9a1b710230f2c786fd2cede3adb29370d?visible=2 | DBX: Make sure that the java8 toolchain uses the right options ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/497532f9878b3b68582c12766bf034a4de6cc44a?visible=6 | DBX: rootfs patch for the linux-sandbox ]]

Also see https://blog.bazel.build/2022/01/19/bazel-5.0.html

DTOOLS-1748

Test Plan: Will run the main projects and CI and make sure that things still work.

Ran `bzl tool //dropbox/devtools/bazel_metrics/benchmarks --target //services/metaserver edit-refresh` on both this diff and master.

On 4.1.0 on master:

```
Running no-op reload 5 times...
Finished running no-op reload! The results were: min: 3.01s avg: 3.08s p50: 3.08s max: 3.21s
Running modify metaserver/static/js/modules/core/uri.ts 5 times...
Finished running modify metaserver/static/js/modules/core/uri.ts! The results were: min: 5.30s avg: 5.78s p50: 5.77s max: 6.59s
Running modify metaserver/static/css/legacy_browse.scss 5 times...
Finished running modify metaserver/static/css/legacy_browse.scss! The results were: min: 4.46s avg: 4.83s p50: 4.69s max: 5.26s
Running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts 5 times...
Finished running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts! The results were: min: 25.69s avg: 26.21s p50: 26.22s max: 26.89s
Running modify metaserver/static/error/maintenance.html 5 times...
Finished running modify metaserver/static/error/maintenance.html! The results were: min: 4.75s avg: 4.85s p50: 4.75s max: 5.01s
```

On 5.0.0:

```
Running no-op reload 5 times...
Finished running no-op reload! The results were: min: 3.48s avg: 3.69s p50: 3.48s max: 3.90s
Running modify metaserver/static/js/modules/core/uri.ts 5 times...
Finished running modify metaserver/static/js/modules/core/uri.ts! The results were: min: 5.54s avg: 6.34s p50: 5.54s max: 8.59s
Running modify metaserver/static/css/legacy_browse.scss 5 times...
Finished running modify metaserver/static/css/legacy_browse.scss! The results were: min: 4.34s avg: 4.75s p50: 5.05s max: 5.46s
Running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts 5 times...
Finished running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts! The results were: min: 25.55s avg: 25.96s p50: 25.64s max: 26.71s
Running modify metaserver/static/error/maintenance.html 5 times...
Finished running modify metaserver/static/error/maintenance.html! The results were: min: 4.79s avg: 5.33s p50: 5.15s max: 5.84s
```

GitOrigin-RevId: 0f466c5a3bde9ed1157ea936bb70826b58f2fbec
@coeuvre is there a plan, or an evaluation you're looking for, to turn this on by default? If anyone on this bug has data on the memory/latency tradeoff of turning on this cache for their own builds, especially at the default size (1000 from #14959), that might be useful. (I ask only as an interested observer - if the cache is as useful as it sounds like, i.e. the experiment is considered a success, I'd like everyone to get the benefit of it by default and not have to know they need to manually opt in.)
I think a more mature way to approach this is: […]
If we have either one of the above, I think it would be a lot easier to transition this into non-experimental.
I am not confident enough to turn this on by default: […]
I agree with what @sluongng said, but I don't have the workload to work on the improvement.
I just tried this experimental setting on a TypeScript-heavy codebase and it increased our fully-remote-cached test run time from 1.5 minutes to nearly 10 minutes.
That sounds really bad. I've mostly been looking at the output from […]
FWIW, I tried bumping the cache size up to 4000 but saw no improvement.
The Remote Execution API meeting notes from 2022-10-11 read: […]
It seems like it is not working as intended. I haven't looked into it for a long time, so I don't know whether something in Bazel has changed in an unfortunate way for this patch.
MerkleTree calculations are now cached for each node in the input NestedSets (depsets). This drastically improves the speed when checking for remote cache hits. One example reduced the Merkle tree calculation time from 78 ms to 3 ms for 3000 inputs.
This caching can be disabled using --remote_cache_merkle_trees=false, which will reduce the memory footprint. The caching is discarded after each build to free up memory; the cache setup time is negligible.
Fixes #10875.