
Cache merkle trees #13879

Closed · wants to merge 5 commits

Conversation

@moroten (Contributor) commented Aug 19, 2021

MerkleTree calculations are now cached for each node in the input NestedSets (depsets). This drastically improves the speed when checking for remote cache hits. One example reduced the Merkle tree calculation time from 78 ms to 3 ms for 3000 inputs.

The caching can be disabled with --remote_cache_merkle_trees=false, which reduces the memory footprint. The cache is discarded after each build to free up memory; the cache setup time is negligible.

Fixes #10875.
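The idea in the PR description can be sketched with plain Java. This is an illustrative toy, not Bazel's actual MerkleTree or NestedSet classes; all names (Node, digest, and so on) are hypothetical. Each depset node, keyed by object identity, maps to its computed digest, so a subtree shared between actions is hashed only once:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

public class MerkleMemoDemo {

  /** Toy stand-in for a NestedSet (depset) node: direct leaves plus transitive children. */
  static final class Node {
    final List<String> leaves;
    final List<Node> children;

    Node(List<String> leaves, List<Node> children) {
      this.leaves = leaves;
      this.children = children;
    }
  }

  static int computations = 0;

  /** Digests a subtree, reusing cached digests for nodes shared between actions. */
  static String digest(Node node, Map<Node, String> cache) {
    String cached = cache.get(node);
    if (cached != null) {
      return cached;
    }
    computations++;
    StringBuilder sb = new StringBuilder();
    for (String leaf : node.leaves) {
      sb.append(leaf).append('\n');
    }
    for (Node child : node.children) {
      sb.append(digest(child, cache)).append('\n');
    }
    String result = sha256(sb.toString());
    cache.put(node, result);
    return result;
  }

  static String sha256(String s) {
    try {
      byte[] hash = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
      StringBuilder hex = new StringBuilder();
      for (byte b : hash) {
        hex.append(String.format("%02x", b));
      }
      return hex.toString();
    } catch (NoSuchAlgorithmException e) {
      throw new IllegalStateException(e);
    }
  }

  public static void main(String[] args) {
    // Two actions share a "tools" subtree; with the cache it is digested only once.
    Node tools = new Node(List.of("compiler", "linker"), List.of());
    Node actionA = new Node(List.of("a.cc"), List.of(tools));
    Node actionB = new Node(List.of("b.cc"), List.of(tools));
    Map<Node, String> cache = new IdentityHashMap<>();
    digest(actionA, cache);
    digest(actionB, cache);
    System.out.println("digest computations: " + computations); // 3 nodes, not 4 traversals
  }
}
```

Without the cache, the shared subtree would be traversed once per action; with it, the per-check cost is proportional to the nodes not seen before, which matches the reported 78 ms to 3 ms drop in spirit.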

@coeuvre (Member) commented Aug 20, 2021

Thanks for the PR! I like the performance boost.

It touches some areas that I'm not familiar with yet, so it may take me a little while to give feedback.

Can we change the switch flag to --experimental_remote_cache_merkle_trees and make it false by default?

@coeuvre coeuvre self-requested a review August 20, 2021 03:54
@coeuvre coeuvre self-assigned this Aug 20, 2021
@coeuvre added labels on Aug 20, 2021: team-Remote-Exec (Issues and PRs for the Execution (Remote) team), type: feature request, P1 (I'll work on this now; assignee required)
@moroten (Contributor, Author) commented Aug 25, 2021

Adding tests on buildRemoteAction() to verify that memoization is actually happening seems reasonable. Are there any other tests that should be added as well? Is it over the top to write tests on RunfilesSupplier.equals(), or is that expected?

```diff
@@ -136,7 +137,8 @@
   @Nullable private final RemoteExecutionClient remoteExecutor;
   private final ImmutableSet<PathFragment> filesToDownload;
   @Nullable private final Path captureCorruptedOutputsDir;
-  private final ConcurrentHashMap<Object, MerkleTree> merkleTreeCache = new ConcurrentHashMap<>();
+  private final Cache<Object, MerkleTree> merkleTreeCache =
+      CacheBuilder.newBuilder().softValues().build();
```
moroten (Contributor, Author) commented on the diff:

We run out of memory with this patch. Is CacheBuilder.softValues() not enough? Is it possible to do something else? CacheBuilder.maximumSize() seems too blunt, as a good value depends on what is being built.
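For background, softValues() stores each cached value behind a java.lang.ref.SoftReference, which the garbage collector may clear under memory pressure. A minimal stdlib sketch of the idea (the class name is hypothetical; this is not the Guava implementation):

```java
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Toy soft-value cache: values may be reclaimed by the GC under memory pressure. */
public class SoftValueCache<K, V> {
  private final ConcurrentMap<K, SoftReference<V>> map = new ConcurrentHashMap<>();

  public void put(K key, V value) {
    map.put(key, new SoftReference<>(value));
  }

  /** Returns null when the key is absent or its value was already reclaimed. */
  public V get(K key) {
    SoftReference<V> ref = map.get(key);
    if (ref == null) {
      return null;
    }
    V value = ref.get();
    if (value == null) {
      map.remove(key); // clean up the cleared entry
    }
    return value;
  }
}
```

In practice, soft references are typically only cleared shortly before an OutOfMemoryError, and the exact behavior varies between collectors, which is presumably why an explicit size limit turned out to be needed here.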

moroten (Contributor, Author) commented on the diff:
Introduced --experimental_remote_merkle_tree_cache_size for now.

@moroten (Contributor, Author) commented Aug 28, 2021

Fixed a recursion problem with inputBytes and inputFiles which made the previous attempt too slow. Also, introduced an explicit parameter for the cache size due to out of memory problems. Try values in the range 1000-10000.

A test is now available, but it looks a bit clumsy. Is that kind of test the right way to go to check if the caching is actually taking effect?

@moroten (Contributor, Author) commented Sep 14, 2021

I don't have any updated measurements and am not actively working on this. One old example with 3000 inputs cut the calculation time from 78 ms to 0.7 ms (or even less). With the fast alias hash I tried before, I got down to 0.1 ms. Bazel has more overhead, in the range of 10 ms for my example action, so 1 ms versus 0.1 ms doesn't really matter.

There is a bug in the new getInputMapping() that I've heard about. More details and/or a fix will come as soon as I know more.

@torgil (Contributor) commented Sep 16, 2021

There is a bug in the new getInputMapping() that I've heard about. More details and/or a fix will come as soon as I know more.

It's a change of behavior for the case where the same file appears in different nested sets but with different casing on Windows.

An example leading to this is the winsock2 interface library in the Microsoft CLibs package, which is named "um/x64/WS2_32.Lib", a file ending that cc_import currently doesn't allow. It is, however, fine with "um/x64/WS2_32.lib", and if the same file is also a dependency from a glob in the toolchain, we end up with two objects for the same file in different nested sets with different casing.

Assume we have a nested set with a ".lib" version and a nested set with a ".Lib" version of the same file.

Old getInputMapping():
expandArtifacts() is called, which loops over inputs.toList(); this ignores the file ending with ".Lib".
Resulting inputMap entry: key = ".lib" PathFragment, value = ".lib" ActionInput.

New getInputMapping():
inputMap.putAll() is called for each nested set, which overrides the value but not the key.
Resulting inputMap entry: key = ".lib" PathFragment, value = ".Lib" ActionInput.

This mismatch later confuses ActionMetadataHandler, which fails with a java.lang.IllegalStateException on this line for the ".Lib" input:
checkState(forInputDiscovery, "%s is not present in declared outputs: %s", artifact, outputs);

The immediate solution for us here is to let cc_import accept ".Lib" file endings, and maybe fail if targets specify the wrong casing instead (to avoid similar issues). More fine-grained dependencies, such that the file only occurs once, also avoid this issue.

Given the solution above, I'm okay with this patch. If you want the previous behavior, you could modify or replace the putAll call, but I'm unsure what the proper solution is here. There may be more scenarios leading to the above illegal state.
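The key/value mismatch described above can be reproduced with any map whose keys compare case-insensitively, like Windows paths: a putAll-style merge replaces the value for an equal key but keeps the original key object. Plain strings stand in for PathFragment and ActionInput here; this is a toy, not Bazel's code:

```java
import java.util.Map;
import java.util.TreeMap;

public class CasingDemo {

  /** Merges two "nested sets" that contain the same Windows path with different casing. */
  public static Map.Entry<String, String> merged() {
    // Keys compare case-insensitively, like paths on Windows.
    TreeMap<String, String> inputMap = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
    inputMap.put("um/x64/WS2_32.lib", "ActionInput(.lib)");

    TreeMap<String, String> otherSet = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
    otherSet.put("um/x64/WS2_32.Lib", "ActionInput(.Lib)");

    inputMap.putAll(otherSet); // replaces the value, but keeps the original key object
    return inputMap.firstEntry();
  }

  public static void main(String[] args) {
    Map.Entry<String, String> entry = merged();
    // The key stays ".lib" while the value becomes ".Lib": the described mismatch.
    System.out.println(entry.getKey() + " -> " + entry.getValue());
  }
}
```

This is exactly the key = ".lib" PathFragment, value = ".Lib" ActionInput combination that later trips up ActionMetadataHandler.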

@Wyverald (Member) commented Oct 4, 2021

@coeuvre is out of office until next week. I'll ping him again by then. Sorry for the delay!

@coeuvre (Member) left a review comment:

Sorry for the delay. LGTM generally but would like to have some changes before merging:

  1. Can we move the newly added walkInputs method and its related interfaces/implementations to a place that is private to the remote module? Reasons:
    • SpawnExecutionContext is also used internally. Adding methods there makes it hard to import.
    • This PR tries to improve Merkle tree performance for remote execution. Keeping the change within the remote module reduces the risk of introducing regressions for other spawn strategies.
  2. SpawnInputExpander#getInputMapping is also used by other spawn strategies and getting it right is crucial. Can we keep its original implementation untouched (rather than reuse walkInputs) as starter and make sure we use the original one when the switch is off?
  3. Can we use two flags, one boolean flag for whether enabling merkle tree cache and another one for controlling the cache size?

@moroten moroten requested a review from a team as a code owner October 17, 2021 21:17
@moroten (Contributor, Author) commented Oct 17, 2021

I've rebased and resolved the merge conflicts.

Sorry for the delay. LGTM generally but would like to have some changes before merging:

  1. Can we move the newly added walkInputs method and its related interfaces/implementations to a place that is private to remote module? Reasons:

    • SpawnExecutionContext is also used internally. Adding methods there makes it hard to import.
    • This PR tries to improve Merkle tree performance for remote execution. Keeping the change within the remote module reduces the risk of introducing regressions for other spawn strategies.

SpawnInputExpander needs to implement walkInputs() and the only reference is from the private member AbstractSpawnStrategy.spawnInputExpander. Is it okay to make spawnInputExpander and SpawnExecutionContextImpl "protected" so they can be overridden and extended by RemoteSpawnStrategy? Then RemoteExecutionService.buildInputMerkleTree() can check if context is of a special RemoteSpawnExecutionContext (or a better name) which then should support walkInputs(). Does anyone know a better way of routing?

  1. SpawnInputExpander#getInputMapping is also used by other spawn strategies and getting it right is crucial. Can we keep its original implementation untouched (rather than reuse walkInputs) as starter and make sure we use the original one when the switch is off?

Done; I removed the commit "Reimplement getInputMapping() using walkInputs()". SpawnInputExpander does not have access to RemoteOptions, so it would take a bit of plumbing to switch getInputMapping() depending on the flag. When point 1 gets sorted, it might be possible to switch on the flag again.

  1. Can we use two flags, one boolean flag for whether enabling merkle tree cache and another one for controlling the cache size?

Done. Default size is set to unlimited, but maybe 1000 or 10000 would be more usable.

@coeuvre (Member) commented Oct 19, 2021

SpawnInputExpander needs to implement walkInputs() and the only reference is from the private member AbstractSpawnStrategy.spawnInputExpander. Is it okay to make spawnInputExpander and SpawnExecutionContextImpl "protected" so they can be overridden and extended by RemoteSpawnStrategy? Then RemoteExecutionService.buildInputMerkleTree() can check if context is of a special RemoteSpawnExecutionContext (or a better name) which then should support walkInputs(). Does anyone know a better way of routing?

SGTM. In this case, we can also move walkInputs() implementation from SpawnInputExpander to RemoteSpawnExecutionContext.

Note that RemoteSpawnStrategy is only used when remote execution is enabled. We may also want to enable the Merkle tree cache for remote-cache-only builds. (See the call sites of RemoteExecutionService.buildRemoteAction.)

Please also note that the result of getInputMapping() is cached by SpawnExecutionContextImpl for the following use cases:

  1. Prefetch the inputs for local execution.
  2. After execution, log the inputs of the spawn.

We may want to override getInputMapping() in RemoteSpawnExecutionContext to reuse the result of walkInputs().

Done. Default size is set to unlimited, but maybe 1000 or 10000 would be more usable.

Thanks. Unlimited is fine to get started.
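The caching pattern under discussion (compute the input mapping lazily on first use, then reuse it for prefetching and spawn logging) looks roughly like the sketch below. The class and the stand-in expandInputs() method are hypothetical simplifications, not Bazel's SpawnExecutionContextImpl:

```java
import java.util.SortedMap;
import java.util.TreeMap;

/** Toy version of the pattern: expand the inputs once, then reuse the cached result. */
public class LazyInputMappingDemo {
  static int expansions = 0;

  private SortedMap<String, String> lazyInputMap; // null until first requested

  /** Stand-in for the expensive SpawnInputExpander walk over the input tree. */
  private SortedMap<String, String> expandInputs() {
    expansions++;
    TreeMap<String, String> map = new TreeMap<>();
    map.put("src/main.cc", "artifact:main.cc");
    return map;
  }

  public SortedMap<String, String> getInputMapping() {
    if (lazyInputMap == null) {
      lazyInputMap = expandInputs();
    }
    return lazyInputMap;
  }
}
```

Overriding getInputMapping() in a remote-specific context to reuse the walkInputs() result, as suggested above, is the same idea: pay the expansion cost once per spawn, no matter how many consumers ask for the mapping.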

@moroten (Contributor, Author) commented Oct 19, 2021

SpawnInputExpander needs to implement walkInputs() and the only reference is from the private member AbstractSpawnStrategy.spawnInputExpander. Is it okay to make spawnInputExpander and SpawnExecutionContextImpl "protected" so they can be overridden and extended by RemoteSpawnStrategy? Then RemoteExecutionService.buildInputMerkleTree() can check if context is of a special RemoteSpawnExecutionContext (or a better name) which then should support walkInputs(). Does anyone know a better way of routing?

SGTM. In this case, we can also move walkInputs() implementation from SpawnInputExpander to RemoteSpawnExecutionContext.

Note that RemoteSpawnStrategy is only used when remote execution is enabled. We may also want to enable merkle tree cache for remote cache only build. (See the call sites of RemoteExecutionService.buildRemoteAction)

As this PR should work for remote caching also, moving things into RemoteSpawnStrategy will not work. (Thanks for pointing it out!)

In some way spawnInputExpander needs to be accessed. What about adding SpawnExecutionContext.getSpawnInputExpander()? Then RemoteExecutionService would call context.getSpawnInputExpander().walkInputs(). Instead of polluting the interfaces with walkInputs(), the spawnInputExpander will be exposed. Is that a good enough solution?

@coeuvre (Member) commented Oct 20, 2021

In some way spawnInputExpander needs to be accessed. What about adding SpawnExecutionContext.getSpawnInputExpander()? Then RemoteExecutionService would call context.getSpawnInputExpander().walkInputs(). Instead of polluting the interfaces with walkInputs(), the spawnInputExpander will be exposed. Is that a good enough solution?

SGTM.

@moroten (Contributor, Author) commented Oct 20, 2021

In some way spawnInputExpander needs to be accessed. What about adding SpawnExecutionContext.getSpawnInputExpander()? Then RemoteExecutionService would call context.getSpawnInputExpander().walkInputs(). Instead of polluting the interfaces with walkInputs(), the spawnInputExpander will be exposed. Is that a good enough solution?

SGTM.

The implementation is done, but some tests fail. I will check that in the afternoon.

@moroten moroten force-pushed the cache-merkle-trees branch 3 times, most recently from f69c3aa to f6a4f8a Compare October 20, 2021 13:30
```java
help =
    "The number of Merkle trees to memoize to improve the remote cache hit checking speed. "
        + "Even though the cache is automatically pruned according to Java's handling of "
        + "soft references, out-of-memory errors can occurr if set too high. If set to 0 "
```
coeuvre (Member) commented on the diff:

Nit: typo

moroten (Contributor, Author):

Fixed. Thank you!

@coeuvre (Member) left a review comment:

LGTM. I will import and do the required internal changes.

```java
    return inputMap;
  public SortedMap<PathFragment, ActionInput> getInputMap()
      throws IOException, ForbiddenActionInputException {
    if (lazyInputMap == null) {
```
coeuvre (Member) commented on the diff:

It's ok to just return remotePathResolver.getInputMapping(spawnExecutionContext) since SpawnExecutionContext.getInputMapping is supposed to cache the value internally.

moroten (Contributor, Author):

That's right. I've pushed a fixup commit.


```java
@Option(
    name = "experimental_remote_merkle_tree_cache_size",
    defaultValue = "1000",
```
coeuvre (Member) commented on the diff:

defaultValue is 1000 but description says 0. Let's change to 0.

moroten (Contributor, Author):

Well spotted! Fixup commit available.

coeuvre (Member):

Thanks! You probably need to rebase to fix CI errors.

moroten (Contributor, Author):

Rebased and all tests now pass. I've saved all the old versions on my own fork, in case they're of any interest.

This change allows for future lookup tables for caching with the
RunfilesSupplier as key.
getInputMapping() is slow (it can take several milliseconds), so do not call it unless needed.
@coeuvre (Member) commented Oct 26, 2021

Imported and passed all internal tests. Sent out for review; it should be merged soon.

@bazel-io bazel-io closed this in becd149 Oct 27, 2021
@brentleyjones (Contributor):
Would be nice to get this into 5.0.

coeuvre pushed a commit to coeuvre/bazel that referenced this pull request Oct 28, 2021
When --experimental_remote_merkle_tree_cache is set, Merkle tree calculations are cached for each node in the input NestedSets (depsets). This drastically improves the speed when checking for remote cache hits. One example reduced the Merkle tree calculation time from 78 ms to 3 ms for 3000 inputs.

The memory footprint of the cache is controlled by --experimental_remote_merkle_tree_cache_size.

The caching is discarded after each build to free up memory, the cache setup time is negligible.

Fixes bazelbuild#10875.

Closes bazelbuild#13879.

PiperOrigin-RevId: 405793372
Wyverald pushed a commit that referenced this pull request Oct 28, 2021

Co-authored-by: Fredrik Medley <[email protected]>
@moroten (Contributor, Author) commented Oct 28, 2021

Thank you @coeuvre for the review and cherry-picking it into 5.0 👍

@sluongng (Contributor) commented Mar 4, 2022

A warning for whoever wants to enable this flag in their build: do NOT use the default value of experimental_remote_merkle_tree_cache_size, which is zero.

It is essentially an in-memory cache, constructed using https://github.com/ben-manes/caffeine/, to help speed up input resolution before using remote cache or remote execution. The flag being listed in the remote execution section of the Bazel 5.0 blog gave me the false impression that this caches things on the remote_cache instead of in Bazel's JVM memory.

And because this is in-memory and not remotely cached, you definitely want to cap it at a certain size using experimental_remote_merkle_tree_cache_size to avoid OOM killing Bazel.

Also see https://github.com/ben-manes/caffeine/wiki/Eviction#reference-based for more details on how soft-value Caffeine caches work:

Because of the performance implications of using soft references, we generally recommend using the more predictable maximum cache size instead.

I would suggest a default cache size of 1000-3000, just to give folks a sane value to start with. And perhaps update the docs to note that the optimal cache size needs to be calibrated per workspace.

@moroten wdyt?
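For illustration, a size-capped cache of the kind argued for above can be sketched with the JDK alone. Bazel uses Caffeine for the real cache; this LinkedHashMap toy just shows what a hard size bound buys over soft values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy size-capped LRU cache: evicts the least-recently-used entry beyond maxSize. */
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public BoundedCache(int maxSize) {
    super(16, 0.75f, /* accessOrder= */ true); // iterate least-recently-used first
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxSize; // evict once the cap is exceeded
  }
}
```

A hard cap trades recomputation of evicted subtrees for a predictable memory ceiling, which is exactly the maximum-size-over-soft-values recommendation from the Caffeine docs quoted above.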

@moroten (Contributor, Author) commented Mar 4, 2022

I would suggest a default cache size of 1000-3000, just to give folks a sane value to start with. And perhaps update the docs to note that the optimal cache size needs to be calibrated per workspace.

It was mentioned above as well, but I just set it to unlimited. I think defaulting to anything between 1000 and 10000 seems perfectly reasonable, together with adding "the optimal cache size needs to be calibrated per workspace" to the docs.

I'm no longer working on the projects where I could test the effect of different cache sizes, so I can't help with any specific number.

@sluongng feel free to create a PR, unless anyone else has opinions.

@sluongng (Contributor) commented Mar 4, 2022

I created #14959

@sluongng sluongng mentioned this pull request Mar 4, 2022
jhance pushed a commit to dropbox/dbx_build_tools that referenced this pull request Mar 23, 2022
Summary:
This diff updates our Bazel version to 5.0.0.

I've re-worked our patches on top of the upstream 5.0.0 tag.

- `linux-sandbox` has changed a bunch (mostly in `linux-sandbox-pid1.cc`) so I had to rewrite a bunch of the patch. The main change is that upstream has added a bunch of logic from our own patch in order to support the new hermetic sandbox flag (see bazelbuild/bazel#13279). So I've cleaned things up, removed some of our code and instead called their new code. One change is that they hardlink an empty file outside of the sandbox rather than creating new files, which sounds ok. Note that we might be able to remove even more of our own patch in favor of their hermetic support but we can do that later.
- The merkle tree computation moved from `RemoteSpawnCache` to the `RemoteExecutionService`. We should be able to rewrite the patch fairly easily, but they've also added an (in-process) cache for those trees (see bazelbuild/bazel#13879), so it might help with the slowness that we were seeing before. I'm inclined not to apply the patch to start with; we can add it back if things get much slower.

The changes are on the `dbx-2022-02-25-5.0.0` branch in the bazel repo.

Here's the list of our own commits on top of upstream:
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/5a121a34b1a2a39530bf6cecc3892fc4509a1735?visible=2 | DBX: Helper scripts to build dbx bazel ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/c3707dea392806b81f2892d46ede5bf54ef02527?visible=1 | DBX: Point remotejdk URLs to magic mirror ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/dc5a85b9a1b710230f2c786fd2cede3adb29370d?visible=2 | DBX: Make sure that the java8 toolchain uses the right options ]]
- [[ https://sourcegraph.pp.dropbox.com/bazel/-/commit/497532f9878b3b68582c12766bf034a4de6cc44a?visible=6 | DBX: rootfs patch for the linux-sandbox ]]

Also see https://blog.bazel.build/2022/01/19/bazel-5.0.html

DTOOLS-1748

Test Plan:
Will run the main projects and CI and make sure that things still work.

Ran `bzl tool //dropbox/devtools/bazel_metrics/benchmarks --target //services/metaserver edit-refresh` on both this diff and master.

On 4.1.0 on master:

```
Running no-op reload 5 times...
Finished running no-op reload! The results were:
  min: 3.01s
  avg: 3.08s
  p50: 3.08s
  max: 3.21s
Running modify metaserver/static/js/modules/core/uri.ts 5 times...
Finished running modify metaserver/static/js/modules/core/uri.ts! The results were:
  min: 5.30s
  avg: 5.78s
  p50: 5.77s
  max: 6.59s
Running modify metaserver/static/css/legacy_browse.scss 5 times...
Finished running modify metaserver/static/css/legacy_browse.scss! The results were:
  min: 4.46s
  avg: 4.83s
  p50: 4.69s
  max: 5.26s
Running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts 5 times...
Finished running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts! The results were:
  min: 25.69s
  avg: 26.21s
  p50: 26.22s
  max: 26.89s
Running modify metaserver/static/error/maintenance.html 5 times...
Finished running modify metaserver/static/error/maintenance.html! The results were:
  min: 4.75s
  avg: 4.85s
  p50: 4.75s
  max: 5.01s
```

On 5.0.0

```
Running no-op reload 5 times...
Finished running no-op reload! The results were:
  min: 3.48s
  avg: 3.69s
  p50: 3.48s
  max: 3.90s
Running modify metaserver/static/js/modules/core/uri.ts 5 times...
Finished running modify metaserver/static/js/modules/core/uri.ts! The results were:
  min: 5.54s
  avg: 6.34s
  p50: 5.54s
  max: 8.59s
Running modify metaserver/static/css/legacy_browse.scss 5 times...
Finished running modify metaserver/static/css/legacy_browse.scss! The results were:
  min: 4.34s
  avg: 4.75s
  p50: 5.05s
  max: 5.46s
Running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts 5 times...
Finished running add file at metaserver/static/js/modules/core/devbox-benchmark-file-{}.ts! The results were:
  min: 25.55s
  avg: 25.96s
  p50: 25.64s
  max: 26.71s
Running modify metaserver/static/error/maintenance.html 5 times...
Finished running modify metaserver/static/error/maintenance.html! The results were:
  min: 4.79s
  avg: 5.33s
  p50: 5.15s
  max: 5.84s
```

GitOrigin-RevId: 0f466c5a3bde9ed1157ea936bb70826b58f2fbec
@EricBurnett:
@coeuvre is there a plan, or evaluation you're looking for, to turn this on by default?

If anyone on this bug has data on the memory / latency tradeoff of turning on this cache for their own builds, especially at the default size (1000 from #14959), that might be useful.

(I ask only as an interested observer - if the cache is as useful as it sounds like, i.e. the experiment is considered a success, I'd like everyone to get the benefit of it by default and not have to know they need to manually opt in.)

@sluongng (Contributor):
I think a more mature way to approach this is:

  1. Provide a way to measure these ad-hoc caches out of the box. I have been looking into this during my off hours, and I think we could include metrics for all the Caffeine cache stores in the buildMetrics event in bes.proto if --memory_profile is set. However, that requires some work to wire everything up, which I don't have the capacity for today.

  2. Provide a saner default caching strategy. Perhaps we could provide some sort of "adaptive" strategy where Bazel sets the size of this cache conditionally based on some criteria? That would be way better than 1000, which is a completely arbitrary non-zero value chosen to ensure Caffeine cache performance.

If we have either one of the above, I think it would be a lot easier to transition this into non-experimental.

@coeuvre (Member) commented Sep 15, 2022

I am not confident enough to turn this on by default:

  • Discussion in "Upload missing inputs" performance regression in Bazel 5.0 and 5.3 #16054. I am not sure whether enabling this would improve it or make it worse.
  • I remember @tjgq did a benchmark of building Bazel with this flag turned on, and the result was not promising. /cc @tjgq
  • I would like to see more data/benchmarks for this flag, not only for the wall time but also from the memory perspective.
  • and yes, 1000 is probably not a good default value.

I agree with what @sluongng said but I don't have the workload to work on the improvement.

@chris-codaio:
I just tried this experimental setting out on a TypeScript-heavy codebase, and it increased our fully-remote-cached test run time from 1.5 minutes to nearly 10 minutes.

@moroten (Contributor, Author) commented Oct 1, 2022

That sounds really bad. I've mostly been looking at the output from --profile --noslim_profile to find out where the bottlenecks are at a high level. Is --experimental_remote_merkle_tree_cache_size=1000 too small? Also, I don't have experience with the TypeScript repositories and rules.

@moroten moroten deleted the cache-merkle-trees branch October 1, 2022 07:17
@chris-codaio:
FWIW, I tried bumping the cache size up to 4000 but saw no improvement.

@moroten (Contributor, Author) commented Oct 12, 2022

The Remote Execution API meeting notes from 2022-10-11 read:

Using remote Merkle tree caching (--experimental_remote_merkle_tree_cache) outcome
Ed: The flag itself didn’t have a big impact, but cherry-picking Bazel PR#16118 did.

It seems like it is not working as intended. I haven't looked into it for a long time, so I don't know if something in Bazel has changed in an unfortunate way for this patch.

Labels: cla: yes, P1 (I'll work on this now; assignee required), team-Remote-Exec, type: feature request

Merging this pull request may close: Reuse input map and merkle tree for tools with lots of inputs

9 participants