
Add method to Engine to fetch max seq no of given SegmentInfos #5970

Merged
merged 3 commits into from
Jan 27, 2023

Conversation

sachinpkale
Member

@sachinpkale sachinpkale commented Jan 23, 2023

Signed-off-by: Sachin Kale [email protected]

Description

  • Currently, the Engine class does not expose a method that provides the max sequence number that was part of the last refresh.
  • The remote segment store needs this info in order to provide refresh-level durability.
  • InternalEngine.lastRefreshedCheckpoint() does not guarantee that the sequence number it provides is part of the refreshed segments.
  • This creates issues like: [BUG] [Remote Store] Getting old sequence number after restoring data from remote segment store #5971
  • The _seq_no field is part of each indexed document. We can use it to query the refreshed segments and fetch the last sequence number (see the sketch after this list).
  • We need to test the performance overhead, though. As part of this PR, I will run the nyc_taxis workload from opensearch-benchmark and compare the performance.
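
For illustration, a minimal sketch of the query idea (not the PR's actual implementation): it assumes _seq_no is indexed with numeric doc values, as OpenSearch's SeqNoFieldMapper does, and sorts everything visible to a searcher by _seq_no descending, reading the sequence number off the top hit. The helper name and the -1 empty-index sentinel are placeholders.

    import java.io.IOException;

    import org.apache.lucene.search.FieldDoc;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.MatchAllDocsQuery;
    import org.apache.lucene.search.Sort;
    import org.apache.lucene.search.SortField;
    import org.apache.lucene.search.TopDocs;

    final class MaxSeqNoUtil {
        private MaxSeqNoUtil() {}

        // Hypothetical helper: find the highest _seq_no among all documents the
        // searcher can see by sorting on the _seq_no doc values in descending order.
        static long maxSeqNo(IndexSearcher searcher) throws IOException {
            Sort bySeqNoDesc = new Sort(new SortField("_seq_no", SortField.Type.LONG, true));
            TopDocs top = searcher.search(new MatchAllDocsQuery(), 1, bySeqNoDesc);
            if (top.scoreDocs.length == 0) {
                return -1; // stand-in for SequenceNumbers.NO_OPS_PERFORMED
            }
            // With a Sort, hits come back as FieldDocs carrying the sort values.
            return (long) ((FieldDoc) top.scoreDocs[0]).fields[0];
        }
    }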

Issues Resolved

Check List

  • New functionality includes testing.
    • All tests pass
  • New functionality has been documented.
    • New functionality has javadoc added
  • Commits are signed per the DCO using --signoff
  • Commit changes are listed out in CHANGELOG.md file (See: Changelog)

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.

@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWithAReadOnlyBlock
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockIsRemovedWhenAnyNodesNotExceedHighWatermark

@codecov-commenter

Codecov Report

Merging #5970 (b415bef) into main (1ad344a) will decrease coverage by 0.17%.
The diff coverage is 0.00%.

@@             Coverage Diff              @@
##               main    #5970      +/-   ##
============================================
- Coverage     70.95%   70.78%   -0.17%     
+ Complexity    58829    58717     -112     
============================================
  Files          4771     4771              
  Lines        280817   280830      +13     
  Branches      40568    40571       +3     
============================================
- Hits         199253   198790     -463     
- Misses        65238    65702     +464     
- Partials      16326    16338      +12     
Impacted Files Coverage Δ
.../main/java/org/opensearch/index/engine/Engine.java 73.25% <0.00%> (-0.90%) ⬇️
...g/opensearch/index/analysis/CharFilterFactory.java 0.00% <0.00%> (-100.00%) ⬇️
...adonly/AddIndexBlockClusterStateUpdateRequest.java 0.00% <0.00%> (-75.00%) ⬇️
...readonly/TransportVerifyShardIndexBlockAction.java 9.75% <0.00%> (-73.18%) ⬇️
...n/admin/indices/readonly/AddIndexBlockRequest.java 17.85% <0.00%> (-53.58%) ⬇️
...ava/org/opensearch/action/NoSuchNodeException.java 0.00% <0.00%> (-50.00%) ⬇️
...a/org/opensearch/tasks/TaskCancelledException.java 50.00% <0.00%> (-50.00%) ⬇️
...adcast/BroadcastShardOperationFailedException.java 55.55% <0.00%> (-44.45%) ⬇️
...indices/readonly/TransportAddIndexBlockAction.java 20.68% <0.00%> (-41.38%) ⬇️
...regations/metrics/AbstractHyperLogLogPlusPlus.java 51.72% <0.00%> (-37.94%) ⬇️
... and 489 more


@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

  • RESULT: UNSTABLE ❕
  • TEST FAILURES:
      1 org.opensearch.cluster.routing.allocation.decider.DiskThresholdDeciderIT.testIndexCreateBlockWhenAllNodesExceededHighWatermark

/**
 * This method fetches the _id of the last indexed document that was part of the refresh and
 * retrieves the _seq_no of that document.
 */
public long getMaxSeqNoRefreshed(String source) throws IOException {
Member

@mch2 mch2 commented Jan 23, 2023

I've been considering a similar change for segrep as well, where we fetch the max seqNo contained in a set of segments so that the seqNo we pass to replicas is accurate to what's in the infos. A few questions/thoughts:

  1. Should we use a query similar to restoreVersionMapAndCheckpointTracker, which uses a range query with a starting seqNo?
  2. InternalEngine is concurrently refreshing, meaning we could have a mismatch here depending on when the infos are fetched versus when this method is invoked while using acquireSearcher on the engine. What if we constructed a reader/IndexSearcher directly from the latest (to-be-uploaded) infos? Similar to NRTReplicationReaderManager:
        // Snapshot the infos so the referenced files are retained, then open a
        // reader pinned to exactly that point-in-time view of the segments.
        try (GatedCloseable<SegmentInfos> segmentInfosGatedCloseable = indexShard.getSegmentInfosSnapshot()) {
            SegmentInfos segmentInfos = segmentInfosGatedCloseable.get();
            DirectoryReader innerReader = StandardDirectoryReader.open(referenceToRefresh.directory(), segmentInfos, null, null);
            final IndexSearcher searcher = new IndexSearcher(innerReader);
            ...
        }

If we had a method that does this search based on the infos, we could reuse it with SR.

Member Author

Should we use a query similar to restoreVersionMapAndCheckpointTracker, which uses a range query with a starting seqNo?

The query used in restoreVersionMapAndCheckpointTracker would be a bit more expensive than the one we are using here. Any particular reason for using the same query?

InternalEngine is concurrently refreshing, meaning we could have a mismatch here depending on when the infos are fetched and this method is invoked while using acquireSearcher on the engine.

Right. Currently, this method does not make any assumptions about SegmentInfos or how it is used in conjunction with either SegRep or Remote Store. The method always returns the maxSeqNo based on the searcher's view of the segments. I will add more info to the javadoc clarifying this.

But I get your point, and it is valid for remote store as well. The solution you provided would work.
I am thinking of overloading this method to take an IndexSearcher as input. What do you think?
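
For concreteness, a rough sketch of what such an overload could look like (the signature and helper here are hypothetical, not a final API):

    // Hypothetical overload: run the same _seq_no search, but against a searcher
    // the caller built over a specific point-in-time view of the segments, so the
    // result cannot race with a concurrent refresh.
    public long getMaxSeqNoRefreshed(String source, IndexSearcher searcher) throws IOException {
        return MaxSeqNoUtil.maxSeqNo(searcher); // the top-hit search sketched earlier
    }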

Member

I'd have to experiment, but my thinking is that the range query would be less expensive than the matchall; I am no expert in this optimization, though.

Overloading and taking in an IndexSearcher or even a SegmentInfos would be great.
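
If the range-query variant were adopted, it might look roughly like this sketch; it assumes _seq_no is also indexed as a LongPoint (as SeqNoFieldMapper does) and that startingSeqNo is a known lower bound such as the local checkpoint:

    import org.apache.lucene.document.LongPoint;
    import org.apache.lucene.search.Query;

    // Hypothetical range variant: only documents above a known starting seqNo are
    // candidates, letting Lucene skip non-matching segments via the point index
    // instead of visiting every document the way a match-all does.
    static long maxSeqNoAbove(IndexSearcher searcher, long startingSeqNo) throws IOException {
        Query aboveStart = LongPoint.newRangeQuery("_seq_no", startingSeqNo + 1, Long.MAX_VALUE);
        Sort bySeqNoDesc = new Sort(new SortField("_seq_no", SortField.Type.LONG, true));
        TopDocs top = searcher.search(aboveStart, 1, bySeqNoDesc);
        return top.scoreDocs.length == 0 ? startingSeqNo : (long) ((FieldDoc) top.scoreDocs[0]).fields[0];
    }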

Member

+1 on the 2nd point Marc brought up. I think it makes sense to build methods that take the segment infos we are interested in. This way it is more deterministic.

Member Author

Requested review from @msfroh to understand the performance implications.

Member Author

Added a method that accepts SegmentInfos. We no longer need to fetch the max seq number of the last refresh (if required, it can be added later).
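
A minimal sketch of that shape, combining the reader construction Marc suggested above with the _seq_no search (not necessarily line-for-line what the PR merged; store.directory() is assumed to be the shard's Lucene directory):

    // Open a point-in-time reader over exactly the given SegmentInfos, so the
    // answer is deterministic for that snapshot rather than depending on whatever
    // the engine's searcher happens to see at call time.
    public long getMaxSeqNoFromSegmentInfos(SegmentInfos segmentInfos) throws IOException {
        try (DirectoryReader reader = StandardDirectoryReader.open(store.directory(), segmentInfos, null, null)) {
            return MaxSeqNoUtil.maxSeqNo(new IndexSearcher(reader)); // sketched earlier
        }
    }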

Member Author

Changed the PR title accordingly.

@@ -2026,4 +2069,5 @@ public long getMaxSeenAutoIdTimestamp() {
* to advance this marker to at least the given sequence number.
*/
public abstract void advanceMaxSeqNoOfUpdatesOrDeletes(long maxSeqNoOfUpdatesOnPrimary);

Member Author

Will remove this extra line.

@sachinpkale sachinpkale changed the title Add method to Engine to fetch max seq no of last refresh Add method to Engine to fetch max seq no of given SegmentInfos Jan 25, 2023
@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

@github-actions
Contributor

Gradle Check (Jenkins) Run Completed with:

@sachinpkale
Member Author

Following is the perf comparison:

Setup

  • Tool used: opensearch-benchmark
  • OpenSearch Benchmark test: nyc_taxis
  • Cluster: 3 nodes, each r5.2xlarge
  • Benchmark node: r5.2xlarge
  • Branch: sachinpkale:max-seq-no-refreshed
  • Baseline: Segment Replication + Remote Segment Store - uses ((InternalEngine) indexShard.getEngine()).lastRefreshedCheckpoint() to populate UserData
  • Contender: Segment Replication + Remote Segment Store - uses indexShard.getEngine().getMaxSeqNoFromSegmentInfos(segmentInfosSnapshot) to populate UserData

Results

|                                                        Metric |                     Task |    Baseline |   Contender |     Diff |   Unit |
|--------------------------------------------------------------:|-------------------------:|------------:|------------:|---------:|-------:|
|                    Cumulative indexing time of primary shards |                          |     147.727 |     151.006 |  3.27945 |    min |
|             Min cumulative indexing time across primary shard |                          |     47.0919 |     48.1206 |  1.02873 |    min |
|          Median cumulative indexing time across primary shard |                          |     49.9032 |     50.5576 |   0.6544 |    min |
|             Max cumulative indexing time across primary shard |                          |     50.7315 |     52.3278 |  1.59632 |    min |
|           Cumulative indexing throttle time of primary shards |                          |           0 |           0 |        0 |    min |
|    Min cumulative indexing throttle time across primary shard |                          |           0 |           0 |        0 |    min |
| Median cumulative indexing throttle time across primary shard |                          |           0 |           0 |        0 |    min |
|    Max cumulative indexing throttle time across primary shard |                          |           0 |           0 |        0 |    min |
|                       Cumulative merge time of primary shards |                          |      114.69 |     122.707 |  8.01703 |    min |
|                      Cumulative merge count of primary shards |                          |         143 |         143 |        0 |        |
|                Min cumulative merge time across primary shard |                          |     36.7336 |     32.9437 | -3.78992 |    min |
|             Median cumulative merge time across primary shard |                          |     38.3727 |     40.2747 |  1.90203 |    min |
|                Max cumulative merge time across primary shard |                          |     39.5833 |     49.4882 |  9.90492 |    min |
|              Cumulative merge throttle time of primary shards |                          |     77.6101 |     85.6929 |  8.08282 |    min |
|       Min cumulative merge throttle time across primary shard |                          |     23.9892 |     21.5101 | -2.47912 |    min |
|    Median cumulative merge throttle time across primary shard |                          |     26.1142 |     27.9193 |  1.80507 |    min |
|       Max cumulative merge throttle time across primary shard |                          |     27.5066 |     36.2635 |  8.75687 |    min |
|                     Cumulative refresh time of primary shards |                          |     9.73658 |     9.79232 |  0.05573 |    min |
|                    Cumulative refresh count of primary shards |                          |         199 |         197 |       -2 |        |
|              Min cumulative refresh time across primary shard |                          |     3.13953 |     3.17892 |  0.03938 |    min |
|           Median cumulative refresh time across primary shard |                          |     3.27632 |     3.25848 | -0.01783 |    min |
|              Max cumulative refresh time across primary shard |                          |     3.32073 |     3.35492 |  0.03418 |    min |
|                       Cumulative flush time of primary shards |                          |     7.65705 |     8.26045 |   0.6034 |    min |
|                      Cumulative flush count of primary shards |                          |          29 |          29 |        0 |        |
|                Min cumulative flush time across primary shard |                          |     2.20242 |     2.17825 | -0.02417 |    min |
|             Median cumulative flush time across primary shard |                          |      2.4122 |     2.30482 | -0.10738 |    min |
|                Max cumulative flush time across primary shard |                          |     3.04243 |     3.77738 |  0.73495 |    min |
|                                       Total Young Gen GC time |                          |       8.422 |       8.379 |   -0.043 |      s |
|                                      Total Young Gen GC count |                          |         441 |         436 |       -5 |        |
|                                         Total Old Gen GC time |                          |           0 |           0 |        0 |      s |
|                                        Total Old Gen GC count |                          |           0 |           0 |        0 |        |
|                                                    Store size |                          |     48.2749 |     48.1409 | -0.13404 |     GB |
|                                                 Translog size |                          | 3.07336e-07 | 3.07336e-07 |        0 |     GB |
|                                        Heap used for segments |                          |           0 |           0 |        0 |     MB |
|                                      Heap used for doc values |                          |           0 |           0 |        0 |     MB |
|                                           Heap used for terms |                          |           0 |           0 |        0 |     MB |
|                                           Heap used for norms |                          |           0 |           0 |        0 |     MB |
|                                          Heap used for points |                          |           0 |           0 |        0 |     MB |
|                                   Heap used for stored fields |                          |           0 |           0 |        0 |     MB |
|                                                 Segment count |                          |          77 |          86 |        9 |        |
|                                                Min Throughput |                    index |     71775.4 |     64740.8 | -7034.67 | docs/s |
|                                               Mean Throughput |                    index |     90100.7 |     85354.7 | -4745.99 | docs/s |
|                                             Median Throughput |                    index |       94181 |     89170.7 | -5010.33 | docs/s |
|                                                Max Throughput |                    index |      100649 |     97768.3 | -2881.07 | docs/s |
|                                       50th percentile latency |                    index |     490.946 |     494.675 |  3.72879 |     ms |
|                                       90th percentile latency |                    index |      1327.2 |     1476.18 |  148.984 |     ms |
|                                       99th percentile latency |                    index |     3098.96 |      3254.9 |  155.936 |     ms |
|                                     99.9th percentile latency |                    index |     7166.03 |     6656.26 | -509.771 |     ms |
|                                    99.99th percentile latency |                    index |     8525.74 |     9151.82 |  626.073 |     ms |
|                                      100th percentile latency |                    index |     8751.67 |     9359.02 |   607.35 |     ms |
|                                  50th percentile service time |                    index |     490.946 |     494.675 |  3.72879 |     ms |
|                                  90th percentile service time |                    index |      1327.2 |     1476.18 |  148.984 |     ms |
|                                  99th percentile service time |                    index |     3098.96 |      3254.9 |  155.936 |     ms |
|                                99.9th percentile service time |                    index |     7166.03 |     6656.26 | -509.771 |     ms |
|                               99.99th percentile service time |                    index |     8525.74 |     9151.82 |  626.073 |     ms |
|                                 100th percentile service time |                    index |     8751.67 |     9359.02 |   607.35 |     ms |
|                                                    error rate |                    index |           0 |           0 |        0 |      % |
|                                                Min Throughput | wait-until-merges-finish |  0.00214627 |  0.00117659 | -0.00097 |  ops/s |
|                                               Mean Throughput | wait-until-merges-finish |  0.00214627 |  0.00117659 | -0.00097 |  ops/s |
|                                             Median Throughput | wait-until-merges-finish |  0.00214627 |  0.00117659 | -0.00097 |  ops/s |
|                                                Max Throughput | wait-until-merges-finish |  0.00214627 |  0.00117659 | -0.00097 |  ops/s |
|                                      100th percentile latency | wait-until-merges-finish |      465924 |      849911 |   383986 |     ms |
|                                 100th percentile service time | wait-until-merges-finish |      465924 |      849911 |   383986 |     ms |
|                                                    error rate | wait-until-merges-finish |           0 |           0 |        0 |      % |
|                                                Min Throughput |                  default |     3.01409 |     3.01291 | -0.00118 |  ops/s |
|                                               Mean Throughput |                  default |     3.02291 |       3.021 | -0.00191 |  ops/s |
|                                             Median Throughput |                  default |      3.0209 |     3.01912 | -0.00177 |  ops/s |
|                                                Max Throughput |                  default |     3.04044 |     3.03702 | -0.00341 |  ops/s |
|                                       50th percentile latency |                  default |     8.89466 |     8.66104 | -0.23363 |     ms |
|                                       90th percentile latency |                  default |     9.71339 |     9.44533 | -0.26805 |     ms |
|                                       99th percentile latency |                  default |     10.9193 |     10.0244 | -0.89486 |     ms |
|                                      100th percentile latency |                  default |     11.6583 |     10.0296 | -1.62868 |     ms |
|                                  50th percentile service time |                  default |      7.7509 |     7.50835 | -0.24255 |     ms |
|                                  90th percentile service time |                  default |     8.60715 |     8.28762 | -0.31954 |     ms |
|                                  99th percentile service time |                  default |     9.95995 |     8.73689 | -1.22306 |     ms |
|                                 100th percentile service time |                  default |     10.1028 |     8.78321 | -1.31962 |     ms |
|                                                    error rate |                  default |           0 |           0 |        0 |      % |
|                                                Min Throughput |                    range |    0.704271 |    0.704084 | -0.00019 |  ops/s |
|                                               Mean Throughput |                    range |    0.707028 |    0.706718 | -0.00031 |  ops/s |
|                                             Median Throughput |                    range |    0.706392 |    0.706111 | -0.00028 |  ops/s |
|                                                Max Throughput |                    range |    0.712707 |    0.712144 | -0.00056 |  ops/s |
|                                       50th percentile latency |                    range |     63.3486 |     61.2342 | -2.11436 |     ms |
|                                       90th percentile latency |                    range |     69.0348 |     69.5296 |  0.49481 |     ms |
|                                       99th percentile latency |                    range |     76.5362 |     73.4017 | -3.13446 |     ms |
|                                      100th percentile latency |                    range |      77.217 |     74.8675 | -2.34954 |     ms |
|                                  50th percentile service time |                    range |     61.1679 |     59.1644 | -2.00346 |     ms |
|                                  90th percentile service time |                    range |     66.5662 |     67.4326 |  0.86633 |     ms |
|                                  99th percentile service time |                    range |     73.9934 |     71.2109 | -2.78251 |     ms |
|                                 100th percentile service time |                    range |     74.9905 |     72.9895 | -2.00097 |     ms |
|                                                    error rate |                    range |           0 |           0 |        0 |      % |
|                                                Min Throughput |      distance_amount_agg |     2.00988 |     2.01014 |  0.00026 |  ops/s |
|                                               Mean Throughput |      distance_amount_agg |     2.01622 |     2.01667 |  0.00044 |  ops/s |
|                                             Median Throughput |      distance_amount_agg |     2.01476 |     2.01515 |  0.00039 |  ops/s |
|                                                Max Throughput |      distance_amount_agg |     2.02915 |     2.02992 |  0.00076 |  ops/s |
|                                       50th percentile latency |      distance_amount_agg |     6.54403 |     6.55942 |  0.01539 |     ms |
|                                       90th percentile latency |      distance_amount_agg |      7.1117 |     7.17226 |  0.06056 |     ms |
|                                       99th percentile latency |      distance_amount_agg |     9.01084 |     8.54284 | -0.46801 |     ms |
|                                      100th percentile latency |      distance_amount_agg |     9.84544 |     8.61598 | -1.22947 |     ms |
|                                  50th percentile service time |      distance_amount_agg |     5.24882 |     5.24152 |  -0.0073 |     ms |
|                                  90th percentile service time |      distance_amount_agg |     5.56447 |     5.60493 |  0.04046 |     ms |
|                                  99th percentile service time |      distance_amount_agg |     7.32996 |     7.17043 | -0.15953 |     ms |
|                                 100th percentile service time |      distance_amount_agg |     8.15503 |     7.55916 | -0.59587 |     ms |
|                                                    error rate |      distance_amount_agg |           0 |           0 |        0 |      % |
|                                                Min Throughput |            autohisto_agg |     1.50394 |     1.50537 |  0.00143 |  ops/s |
|                                               Mean Throughput |            autohisto_agg |     1.50639 |     1.50883 |  0.00244 |  ops/s |
|                                             Median Throughput |            autohisto_agg |     1.50583 |     1.50803 |  0.00219 |  ops/s |
|                                                Max Throughput |            autohisto_agg |     1.51127 |     1.51584 |  0.00457 |  ops/s |
|                                       50th percentile latency |            autohisto_agg |     183.824 |     194.407 |  10.5826 |     ms |
|                                       90th percentile latency |            autohisto_agg |     193.903 |     202.552 |  8.64868 |     ms |
|                                       99th percentile latency |            autohisto_agg |     207.867 |     210.321 |   2.4532 |     ms |
|                                      100th percentile latency |            autohisto_agg |     208.761 |      226.93 |  18.1684 |     ms |
|                                  50th percentile service time |            autohisto_agg |     182.368 |     192.865 |   10.497 |     ms |
|                                  90th percentile service time |            autohisto_agg |      192.23 |     200.893 |  8.66254 |     ms |
|                                  99th percentile service time |            autohisto_agg |     205.914 |     207.639 |  1.72489 |     ms |
|                                 100th percentile service time |            autohisto_agg |     207.313 |     224.649 |  17.3359 |     ms |
|                                                    error rate |            autohisto_agg |           0 |           0 |        0 |      % |
|                                                Min Throughput |       date_histogram_agg |     1.50692 |      1.5068 | -0.00012 |  ops/s |
|                                               Mean Throughput |       date_histogram_agg |     1.51142 |     1.51124 | -0.00019 |  ops/s |
|                                             Median Throughput |       date_histogram_agg |      1.5104 |     1.51024 | -0.00016 |  ops/s |
|                                                Max Throughput |       date_histogram_agg |     1.52057 |      1.5202 | -0.00037 |  ops/s |
|                                       50th percentile latency |       date_histogram_agg |     198.513 |     203.547 |  5.03323 |     ms |
|                                       90th percentile latency |       date_histogram_agg |     208.905 |     209.999 |  1.09398 |     ms |
|                                       99th percentile latency |       date_histogram_agg |     219.724 |     223.711 |  3.98719 |     ms |
|                                      100th percentile latency |       date_histogram_agg |     220.613 |     231.026 |  10.4131 |     ms |
|                                  50th percentile service time |       date_histogram_agg |     197.186 |     202.209 |  5.02375 |     ms |
|                                  90th percentile service time |       date_histogram_agg |     207.466 |     208.812 |  1.34523 |     ms |
|                                  99th percentile service time |       date_histogram_agg |      217.02 |     222.797 |  5.77662 |     ms |
|                                 100th percentile service time |       date_histogram_agg |     219.206 |     229.829 |  10.6236 |     ms |
|                                                    error rate |       date_histogram_agg |           0 |           0 |        0 |      % |

@sachinpkale
Member Author

@mch2 Please review the perf test results posted above. I have specifically checked for refresh time and indexing throughput and haven't observed much degradation.

@mch2
Member

mch2 commented Jan 26, 2023

Baseline - Segment Replication + Remote Segment Store - Uses ((InternalEngine) indexShard.getEngine()).lastRefreshedCheckpoint() to populate UserData
Contender - Segment Replication + Remote Segment Store - Uses indexShard.getEngine().getMaxSeqNoFromSegmentInfos(segmentInfosSnapshot) to populate UserData

The user data is only updated at lucene commit/flush time, so I wouldn't expect this to impact refresh times/throughput much. Are you computing user data in your test in a different spot? With that said, I see no impact on cumulative flush time.

I think we should do a query comparison of the matchall vs the range query, but I think it's ok to start as-is. We can do this after integrating with the SR & remote store flows.

Member

@mch2 mch2 left a comment

Thanks for this change @sachinpkale. Let's discuss refactoring our checkpoint listeners in a separate issue so we aren't computing infos snapshots & this seqNo twice per refresh.

@gbbafna gbbafna merged commit 750bfc6 into opensearch-project:main Jan 27, 2023
@gbbafna gbbafna added the backport 2.x Backport to 2.x branch label Jan 27, 2023
opensearch-trigger-bot bot pushed a commit that referenced this pull request Jan 27, 2023
* Add method to Engine to fetch max seq no of last refresh
Co-authored-by: Sachin Kale <[email protected]>

(cherry picked from commit 750bfc6)
Signed-off-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
sachinpkale added a commit to sachinpkale/OpenSearch that referenced this pull request Jan 27, 2023
…earch-project#5970)

* Add method to Engine to fetch max seq no of last refresh
Co-authored-by: Sachin Kale <[email protected]>
mch2 pushed a commit to mch2/OpenSearch that referenced this pull request Mar 4, 2023
…earch-project#5970)

* Add method to Engine to fetch max seq no of last refresh
Co-authored-by: Sachin Kale <[email protected]>
Labels
backport 2.x Backport to 2.x branch skip-changelog
5 participants