
Circuit-break based on real memory usage #31767

Merged

Conversation

danielmitterdorfer
Member

@danielmitterdorfer danielmitterdorfer commented Jul 3, 2018

With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting indices.breaker.total.use_real_memory.

Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.
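
For reference, a minimal `elasticsearch.yml` sketch of how this can be configured (the values mirror the defaults described above; treat this as an illustration, not authoritative documentation):

```yaml
# Use real heap memory usage for the parent circuit breaker (the new default).
indices.breaker.total.use_real_memory: true
# The parent breaker trips when real usage plus the new reservation exceeds this limit.
indices.breaker.total.limit: 95%
```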

@danielmitterdorfer danielmitterdorfer added >enhancement review :Core/Infra/Circuit Breakers Track estimates of memory consumption to prevent overload v7.0.0 labels Jul 3, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-core-infra

@danielmitterdorfer
Member Author

danielmitterdorfer commented Jul 3, 2018

A few more comments on this change for reviewers:

Initially I was worried about the overhead of `memoryMXBean.getHeapMemoryUsage().getUsed()` for every check in the parent breaker. Therefore I created a microbenchmark (see `MemoryStatsBenchmark`) and ran it on several machines and different JDK versions:

| CPU | JDK | Duration per call - 1 thread [ns] | Duration per call - 64 threads [ns] |
|-----|-----|-----------------------------------|-------------------------------------|
| Intel(R) Xeon(R) CPU E3-1246 v3 @ 3.50GHz (8 cores with HT) | 1.8.0_131-b11 | 358 ± 11 | -- |
| Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz (64 cores with HT) | 10.0.1+10 | 405 ± 1 | 885 ± 10 |

CPUs ran at their base frequency with the performance CPU governor to reduce measurement noise.

So we can expect an overhead of several hundred nanoseconds per call even if we call that operation from lots of threads concurrently. Therefore, I concluded that this operation is cheap enough to call for every check in the parent breaker.
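
For the curious, a minimal JMH sketch in the spirit of that microbenchmark (an illustrative reconstruction, not the actual `MemoryStatsBenchmark` from this PR):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class MemoryStatsBench {
    private final MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();

    // Measures the per-call cost of the heap usage query; run the JMH harness
    // with e.g. "-t 64" to reproduce the multi-threaded column of the table above.
    @Benchmark
    public long currentHeapUsed() {
        return memoryMXBean.getHeapMemoryUsage().getUsed();
    }
}
```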

I also tested the effectiveness of this new circuit breaker strategy with several macrobenchmarks using the Elasticsearch default distribution with X-Pack Security enabled. I set the heap size for Elasticsearch to 256MB, and with the new circuit breaker strategy it could absorb the load in all but one of the benchmarks in our macrobenchmarking suite (bulk-indexing and querying). Of course, the clients got request errors from Elasticsearch - for the full-text benchmark (pmc) as high as 30% - because the circuit breaker did its job. The point of these benchmarks was to show that even if we put a lot of pressure on Elasticsearch, it can sustain it. In the single-node case (one shard, zero replicas), Elasticsearch never had an OutOfMemoryError. In the three-node case (one shard, one replica) we had one node die in the nyc_taxis benchmark while bulk-indexing. Perhaps a bulk size of 10,000 documents and 8 concurrent clients was still too much, but we plan to address this with follow-up PRs.

Finally, I did a dedicated test to see how Elasticsearch behaves when an aggregation produces a crazy amount of buckets on a larger heap (16GB). I ingested the whole http_logs corpus (31GB of source documents). Then I intentionally set `search.max_buckets: 1000000000` and ran the following aggregation:

```json
{
  "aggs": {
    "size_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "second"
      },
      "aggs": {
        "sizes": {
          "percentiles": {
            "field": "size",
            "tdigest": {
              "compression": 1000
            }
          }
        }
      }
    }
  }
}
```
| Configuration | allow_partial_search_results=true | allow_partial_search_results=false |
|---------------|-----------------------------------|------------------------------------|
| With this PR | response after 73 seconds | response after 21 minutes |
| Without this PR | node went OOM after 22 minutes | node went OOM after 28 minutes |

While this PR greatly improves the situation w.r.t. OutOfMemoryErrors in Elasticsearch, this approach is still not perfect. As we do not fully manage heap allocations ourselves (e.g. in a pool) but rather rely on Java's `new` operator, reservation of memory in the circuit breaker and the actual allocation do not occur atomically. This means that the circuit breaker may allow a reservation and yet, if the time window between reservation and allocation is large enough, an OutOfMemoryError can still happen.
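
Schematically, the race window looks like this (an illustrative sketch, not the actual breaker code; `currentHeapUsed`, `bytesNeeded`, and `parentLimit` are made-up names):

```java
// 1) The breaker checks real heap usage plus the bytes about to be reserved.
if (currentHeapUsed() + bytesNeeded > parentLimit) {
    throw new CircuitBreakingException("[parent] Data too large, ...");
}
// 2) Other threads may allocate in the window between the check and the allocation.
// 3) The actual allocation happens later via Java's `new`; if enough memory was
//    allocated in the meantime, it can still fail with an OutOfMemoryError.
byte[] data = new byte[bytesNeeded];
```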

@@ -44,10 +47,24 @@

```java
private static final String CHILD_LOGGER_PREFIX = "org.elasticsearch.indices.breaker.";

private static final MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
```
Member

can we put this var name in all caps since it's static and final? We tend to do that all over the place

Member Author

Of course. That was definitely unintentional.

```java
private final ConcurrentMap<String, CircuitBreaker> breakers = new ConcurrentHashMap<>();

public static final Setting<Boolean> USE_REAL_MEMORY_USAGE_SETTING =
    Setting.boolSetting("indices.breaker.total.userealmemory", settings -> {
        ByteSizeValue maxHeapSize = new ByteSizeValue(memoryMXBean.getHeapMemoryUsage().getMax());
```
Member

To clarify, `memoryMXBean.getHeapMemoryUsage().getMax()` returns the maximum configured, not the maximum currently sized, right? If someone did:

```
-Xms1g
-Xmx8g
```

they aren't going to get 1.37gb because that happens to be what the JVM is currently sized at for the "max"? (In other words, there's no heap resizing issue with this, right?)

Member Author

It returns the maximum configured heap, i.e. they would get 8589934592 (i.e. 8GB) in your example (I verified this).
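
(A quick way to verify this yourself; a standalone snippet, run with `-Xms1g -Xmx8g`:)

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapMax {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.println(heap.getMax());       // 8589934592, i.e. the configured -Xmx
        System.out.println(heap.getCommitted()); // the currently committed (resized) heap
    }
}
```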

```java
for (CircuitBreaker breaker : this.breakers.values()) {
    parentEstimated += breaker.getUsed() * breaker.getOverhead();
}
return parentEstimated;
```
Member

Does this need to be `parentEstimated + newBytesReserved`? We add it to the real memory usage if tracking memory.

Member Author

No, because this is for the current strategy which sums up the total memory reserved by all child circuit breakers. As the corresponding child circuit breaker accounts for that amount of memory already, we do not need to do that again in the parent breaker.

For the new strategy which is based on real memory usage, we do not rely on the child memory circuit breakers but rather only on current memory usage. Hence, we need to consider the amount of memory that is about to be reserved here.
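
To make the two strategies concrete, a simplified sketch (the method and field names follow the code and stack traces in this thread; the exact body is an illustration, not the actual implementation):

```java
long parentUsed(long newBytesReserved) {
    if (this.trackRealMemoryUsage) {
        // New strategy: real heap usage plus the amount about to be reserved.
        return currentMemoryUsage() + newBytesReserved;
    } else {
        // Current strategy: sum what the child breakers already account for;
        // newBytesReserved is already included by the corresponding child breaker.
        long parentEstimated = 0;
        for (CircuitBreaker breaker : this.breakers.values()) {
            parentEstimated += breaker.getUsed() * breaker.getOverhead();
        }
        return parentEstimated;
    }
}
```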

Member

👍

@@ -396,6 +396,14 @@ private Settings getRandomNodeSettings(long seed) {

```java
    builder.put(MappingUpdatedAction.INDICES_MAPPING_DYNAMIC_TIMEOUT_SETTING.getKey(), new TimeValue(RandomNumbers.randomIntBetween(random, 10, 30), TimeUnit.SECONDS));
}

if (random.nextBoolean()) {
    builder.put(HierarchyCircuitBreakerService.USE_REAL_MEMORY_USAGE_SETTING.getKey(), true);
    // allow full allocation in tests to avoid breaking
```
Member

I'm curious, were we breaking tests somewhere with this set to 95%? It seems like we shouldn't hit 95% with our regular tests?

Member Author

We do. I have the impression that this is mainly caused by two facts:

  1. We run multiple integration tests in the same JVM process so garbage will add up.
  2. We do not set a garbage collection algorithm in the integration tests (except for our g1 builds via -Dtests.jvm.argline in the Jenkins configuration). This means we let the JVM choose the algorithm. For the Java 10 builds this is G1, for Java 8 it is ParallelGC. From my tests I had the impression that especially G1 had trouble keeping up with allocation in our tests (as G1 cleans up concurrently) and I experienced circuit breaker issues basically all the time.

I am not sure how we should move forward here in general. G1 is not really optimal for such small heaps, but on the other hand we should also run our tests with G1. I already added this to our group's agenda to discuss. Finally, the 95% is heap usage plus the amount to be reserved.

Member

> Finally, the 95% is heap usage plus the amount to be reserved.

I suppose in that case it would force a real GC if the heap were at 95% and then a new amount was reserved using `new`.

Member Author

Actually the GC should get triggered even earlier. For CMS we have configured `-XX:CMSInitiatingOccupancyFraction=75`. But as the garbage collector is running concurrently with the application, the application can still allocate while the GC is active.

```diff
-The parent-level breaker can be configured with the following setting:
+The parent-level breaker can be configured with the following settings:
+
+`indices.breaker.total.userealmemory`::
```
Member

@dakrone dakrone Jul 3, 2018

I think we should rename this setting to be `indices.breaker.total.use_real_memory`, since we usually use `_` when using multiple words for setting names.

Member Author

Makes sense. I'll change it.

Member

@dakrone dakrone left a comment

I left some comments, excited for this to get in though!


Whether the parent breaker should take real memory usage into account (`true`) or only
consider the amount that is reserved by child circuit breakers (`false`). Defaults to `true`
if the JVM heap size is smaller than 1GB, otherwise `false`.
Member

As we talked about today, I think we may want to think about why this defaults to off and put it in the documentation (or at least mention it in this PR). Right now it comes off as more of "it's off by default for > 1gb heaps for no reason in particular" which seems strange given that we're trying to be as safe as possible by default

Member Author

I am happy to change the default to the real memory circuit breaker for all cases but initially I wanted to be a bit more conservative. I felt that for larger deployments it is ok to stick with the current implementation. I also thought that it might be one thing less to consider in a major version upgrade if we keep the default for now for those deployments.

My idea behind changing it only for lower heap sizes is that below 1GB the deviation between real memory usage and actively tracked memory usage starts to matter more, and we want to ensure we push back accordingly. I did not choose this heap size by coincidence but based on our benchmarks: with a 1GB heap, Elasticsearch can handle our macrobenchmark suite, but it starts to struggle below that point (e.g. at 768MB).

I could ask how the rest of the team feels about turning it on in all cases. Wdyt?

With this commit we request a garbage collection run after the
integration test cluster has started. This avoids garbage piling up and
thus prevents the real memory circuit breaker from tripping.
@danielmitterdorfer
Member Author

@elasticmachine test this please.

Member

@dakrone dakrone left a comment

> My idea behind changing it only for lower heap sizes is that below 1GB the deviation between real memory usage and actively tracked memory usage starts to matter more, and we want to ensure we push back accordingly. I did not choose this heap size by coincidence but based on our benchmarks: with a 1GB heap, Elasticsearch can handle our macrobenchmark suite, but it starts to struggle below that point (e.g. at 768MB).
>
> I could ask how the rest of the team feels about turning it on in all cases. Wdyt?

I think that would be good to pursue, but that doesn't need to block this PR, I think we can discuss it for a follow-up.

LGTM assuming CI is happy :)


With this commit we turn off the real memory circuit breaker for all
integration tests with an internal test cluster because it leads to
spurious test failures which are of no value (we cannot fully control
heap memory usage in tests).
@danielmitterdorfer
Member Author

@dakrone after our discussion yesterday I pushed the following changes:

  • As we cannot fully control heap memory usage in the JVM-internal cluster tests and their purpose is also not testing circuit breakers (that's done by the REST tests), I have disabled the real-memory circuit breaker there.
  • I have enabled the real memory circuit breaker by default for all heap sizes. Consequently, I have now marked the PR as "breaking".

Can you please have one more look at these changes and give your ok if you're fine with them?

Also, in one of my (very many) test runs I spotted the following test error:

```
REPRODUCE WITH: ./gradlew :server:integTest -Dtests.seed=71A44F93AD4E637B -Dtests.class=org.elasticsearch.search.aggregations.metrics.HDRPercentileRanksIT -Dtests.method="testScriptSingleValuedWithParams" -Dtests.security.manager=true -Dtests.locale=zh -Dtests.timezone=Asia/Saigon
FAILURE 0.16s J3 | HDRPercentileRanksIT.testScriptSingleValuedWithParams <<< FAILURES!
   > Throwable #1: java.lang.AssertionError: Count is 8 but 10 was expected.  Total shards: 9 Successful shards: 8 & 1 shard failures:
   >  shard [[DcF7TxxQQe-8jwOROrfM5A][idx][8]], reason [RemoteTransportException[[node_s0][127.0.0.1:62368][indices:data/read/search[phase/query/id]]]; nested: IllegalArgumentException[committed = 542113792 should be < max = 536870912]; ], cause [java.lang.IllegalArgumentException: committed = 542113792 should be < max = 536870912
   >    at java.lang.management.MemoryUsage.<init>(MemoryUsage.java:166)
   >    at sun.management.MemoryImpl.getMemoryUsage0(Native Method)
   >    at sun.management.MemoryImpl.getHeapMemoryUsage(MemoryImpl.java:71)
   >    at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.currentMemoryUsage(HierarchyCircuitBreakerService.java:246)
   >    at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.parentUsed(HierarchyCircuitBreakerService.java:234)
   >    at org.elasticsearch.indices.breaker.HierarchyCircuitBreakerService.checkParentLimit(HierarchyCircuitBreakerService.java:253)
[...]
```

This happened on macOS 10.12.6 with JDK 10 (build 10.0.1+10). The stack trace points to an issue in the (native code of the) JDK. After analyzing the code, I suspect that the problem is the calculation of committed, not max, because max matches the 512MB heap size that we specify for our tests to the byte, but committed is 517MB. Searching for related issues in the OpenJDK bug tracker only reveals an issue with non-heap memory, but not with heap memory. I am not aware of any prior discussion on the OpenJDK mailing lists about this topic. However, there is evidence that this has happened before as well.

I suggest that we handle this as follows:

  • Provided you agree, I will merge this PR to master as is, i.e. I will intentionally merge it with the knowledge that we are now exposed to this JDK bug on master. The idea is that this will give us more CI coverage in the hope that we find conditions under which this is reproducible. We probably need to turn on JVM logging to get more info.
  • I will follow-up on the OpenJDK mailing list asking for feedback how to proceed. I doubt that an issue in the OpenJDK repo will be raised at this point, simply because I could not reproduce it. My hope is that we get some feedback that will help us to get to the bottom of this together with the OpenJDK developers so we can finally raise a ticket against OpenJDK.
  • In case we do not get to the bottom of this before 7.0.0 is out, we should implement a workaround along the lines of:

```java
long currentMemoryUsage() {
    try {
        return MEMORY_MX_BEAN.getHeapMemoryUsage().getUsed();
    } catch (IllegalArgumentException ex) {
        logger.warn("Could not determine real memory usage due to JDK issue.", ex);
        return 0L;
    }
}
```

I think it is better to avoid the `IllegalArgumentException` bubbling up in this case.

As discussed yesterday, this change will also be backported to 6.x but the real memory circuit breaker will be turned off by default (I'll create a separate PR for this). I suggest that we integrate this workaround already in the backport.

Member

@dakrone dakrone left a comment

This LGTM, the steps you outlined in regard to the JDK bug sound good as well.

I noticed the NetBeans bug report is for OSX also, so perhaps it's an OSX-only issue?

@danielmitterdorfer danielmitterdorfer merged commit f174f72 into elastic:master Jul 13, 2018
@danielmitterdorfer
Member Author

Thank you for the review @dakrone! The JDK issue that I mentioned above is now raised against the OpenJDK repo in https://bugs.openjdk.java.net/browse/JDK-8207200.

danielmitterdorfer added a commit to danielmitterdorfer/elasticsearch that referenced this pull request Jul 16, 2018
With this commit we disable the real-memory circuit breaker in REST
tests as this breaker is based on real memory usage over which we have
no (full) control in tests and the REST client is not yet ready to retry
on circuit breaker exceptions.

This is only meant as a temporary measure to avoid spurious test
failures while we ensure that the REST client can handle those
situations appropriately.

Closes elastic#32050
Relates elastic#31767
Relates elastic#31986
danielmitterdorfer added a commit that referenced this pull request Jul 16, 2018
With this commit we disable the real-memory circuit breaker in REST
tests as this breaker is based on real memory usage over which we have
no (full) control in tests and the REST client is not yet ready to retry
on circuit breaker exceptions.

This is only meant as a temporary measure to avoid spurious test
failures while we ensure that the REST client can handle those
situations appropriately.

Closes #32050
Relates #31767
Relates #31986 
Relates #32074
danielmitterdorfer added a commit that referenced this pull request Jul 16, 2018
With this commit we raise the limit of the child circuit breaker used in
the unit test for the circuit breaker service so it is high enough to trip
only the parent circuit breaker. The previous limit was 300 bytes but
theoretically (considering overhead) we could reach 346 bytes. Thus any
value larger than 300 bytes could trip the child circuit breaker leading
to spurious failures.

Relates #31767
danielmitterdorfer added a commit to danielmitterdorfer/elasticsearch that referenced this pull request Aug 24, 2018
With this commit we implement a workaround for
https://bugs.openjdk.java.net/browse/JDK-8207200 which is a race
condition in the JVM that results in `IllegalArgumentException` to be
thrown in rare cases when we determine memory usage via `MemoryMXBean`.
As we do not want to fail requests in those cases we always return zero
memory usage.

Relates elastic#31767
danielmitterdorfer added a commit that referenced this pull request Aug 27, 2018
With this commit we implement a workaround for
https://bugs.openjdk.java.net/browse/JDK-8207200 which is a race
condition in the JVM that results in `IllegalArgumentException` to be
thrown in rare cases when we determine memory usage via `MemoryMXBean`.
As we do not want to fail requests in those cases we always return zero
memory usage.

Relates #31767
Relates #33125
@Bukhtawar
Contributor

When are we planning on backporting this to 6.x versions, now that the OpenJDK bug stands resolved?

@yakirgb

yakirgb commented Sep 11, 2019

Hi, we are using Zing JDK 11 and we are getting errors from the breaker about "Data too large ... which is larger than the limit of 44GB":

```
[2019-09-11T06:19:52,728][WARN ][o.e.a.b.TransportShardBulkAction] [es002.tab.com] [[query_performance_2019.09.10][0]] failed to perform indices:data/write/bulk[s] on replica [query_performance_2019.09.10][0], node[Uc_gz1L9RlKzJrDIuDDPNg], [R], s[STARTED], a[id=ws-0IaCuRQWhSTdq7hVsvQ]
org.elasticsearch.transport.RemoteTransportException: [es003.tab.com][1.2.3.4:9300][indices:data/write/bulk[s][r]]
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [47571970700/44.3gb], which is larger than the limit of [47173546803/43.9gb], real usage: [47548727296/44.2gb], new bytes reserved: [23243404/22.1mb], usages [request=0/0b, fielddata=1005/1005b, in_flight_requests=23243404/22.1mb, accounting=8966251/8.5mb]
```

From the client side:

```json
{"error":{"root_cause":[{"type":"circuit_breaking_exception",
"reason":"[parent] Data too large, data for [] would be [49458019616/46gb],
which is larger than the limit of [47173546803/43.9gb], real usage: [49450844160/46gb],
new bytes reserved: [7175456/6.8mb],
usages [request=0/0b,
fielddata=832/832b,
in_flight_requests=7175456/6.8mb,
accounting=19238170/18.3mb]",
"bytes_wanted":49458019616,
"bytes_limit":47173546803,
"durability":"PERMANENT"}
```

jdk version:

```
[root@es-data001 ~]# java --version
java 11.0.3.0.101 2019-07-24 LTS
Zing Runtime Environment for Java Applications 19.07.0.0+3 (product build 11.0.3.0.101+12-LTS)
Zing 64-Bit Tiered VM 19.07.0.0+3 (product build 11.0.3.0.101-zing_19.07.0.0-b4-product-azlinuxM-X86_64, mixed mode)
```

breakers stats:

      "breakers": {
        "request": {
          "limit_size_in_bytes": 29793819033,
          "limit_size": "27.7gb",
          "estimated_size_in_bytes": 0,
          "estimated_size": "0b",
          "overhead": 1,
          "tripped": 0
        }

Zing conf:

```
[root@es002 ~]# zing-ps --acct -h

 System Zing Memory reserved at configuration (reserve-at-config)
Fund breakdown
                              NAME    BALANCE    MAXIMUM  COMMITTED
       fund[0]:          Committed     6144 M       56 G       56 G
       fund[1]:          Overdraft     4096 M     4096 M     4096 M
       fund[3]:    PausePrevention     4096 M     4096 M     4096 M

Found 1 process
USER         PID PROCESS
elasticsearch 170166 /opt/zing/zing-jdk11/bin/java
                                  NAME    BALANCE  ALLOCATED    MAXIMUM  FND  ODFND
    account[0]:                default      947 M     4728 K      952 M    0      1
    account[2]:                   heap       42 G     7598 M          -    0      1
    account[3]:       pause_prevention         0          0           -    3      3
```

JVM options:

```
[root@es002 ~]# cat /etc/elasticsearch/dba/jvm.options
-Dfile.encoding=UTF-8
-Dio.netty.noKeySetOptimization=true
-Dio.netty.noUnsafe=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Djava.awt.headless=true
-Djna.nosys=true
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-XX:+AlwaysPreTouch
-XX:+HeapDumpOnOutOfMemoryError
-XX:+UseCMSInitiatingOccupancyOnly
-XX:-OmitStackTraceInFastThrow
-XX:CMSInitiatingOccupancyFraction=75
-Xms50g
-Xmx50g
-Xss1m
-server
```

@danielmitterdorfer @dakrone do you have any idea how to debug the issue?
Thanks, Yakir.

@henningandersen
Copy link
Contributor

Note that the question about Zing was dealt with on Discuss.

Labels
>breaking :Core/Infra/Circuit Breakers Track estimates of memory consumption to prevent overload >enhancement v7.0.0-beta1