
MINOR: Remove throttling logic from RecordAccumulator #7195

Merged: 5 commits into apache:trunk from record-accumulator-unmute-simplification on Mar 9, 2020

Conversation

@ijuma (Contributor) commented Aug 11, 2019

This is redundant since Sender and NetworkClient handle throttling. It's
also confusing since the RecordAccumulator logic only applies when
max.in.flight.requests.per.connection=1.

In Sender.sendProducerData, the following code handles throttling:

// client.ready() returns false for a node that is not connected or is
// still throttled, so throttled nodes are dropped from the ready set here;
// pollDelayMs() includes any remaining throttle delay for that node.
while (iter.hasNext()) {
    Node node = iter.next();
    if (!this.client.ready(node, now)) {
        iter.remove();
        // Track the soonest time any not-ready node could become sendable again.
        notReadyTimeout = Math.min(notReadyTimeout, this.client.pollDelayMs(node, now));
    }
}
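
The network layer tracks throttling per node rather than per partition. As a rough, self-contained sketch of that pattern (the class and method names below are illustrative, not the actual NetworkClient API):

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of per-node throttle tracking, modeled on what
// NetworkClient.ready() and pollDelayMs() are described as doing above.
class ThrottleAwareClient {
    // Per-node throttle deadlines, derived from the broker-reported throttle time.
    private final Map<String, Long> throttleUntil = new HashMap<>();

    // Record a throttle reported by the broker for this node.
    void applyThrottle(String nodeId, long nowMs, long throttleTimeMs) {
        throttleUntil.put(nodeId, nowMs + throttleTimeMs);
    }

    // A node that is still throttled is simply "not ready".
    boolean ready(String nodeId, long nowMs) {
        return throttleUntil.getOrDefault(nodeId, 0L) <= nowMs;
    }

    // How long to wait before this node could become sendable again.
    long pollDelayMs(String nodeId, long nowMs) {
        return Math.max(0L, throttleUntil.getOrDefault(nodeId, 0L) - nowMs);
    }
}

With this tracking in one place, a throttled node never appears in readyNodes, so a separate per-partition throttle map in RecordAccumulator adds nothing.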

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@ijuma requested a review from @rajinisivaram, August 11, 2019 16:15
@ijuma (Author) commented Aug 11, 2019

One thing I don't understand is why this logic only runs if max.in.flight=1. cc @lbradstreet

@ijuma force-pushed the record-accumulator-unmute-simplification branch 3 times, most recently from eca22b2 to 3a137c5 on August 17, 2019 21:55
Review comment on the diff in Sender.handleProduceResponse:

@@ -588,13 +587,13 @@ private void handleProduceResponse(ClientResponse response, Map<TopicPartition,
    TopicPartition tp = entry.getKey();
    ProduceResponse.PartitionResponse partResp = entry.getValue();
    ProducerBatch batch = batches.get(tp);
    completeBatch(batch, partResp, correlationId, now, receivedTimeMs + produceResponse.throttleTimeMs());
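
Per the PR's original title ("RecordAccumulator.unmute should add to map if throttling happened"), the final argument here, receivedTimeMs + produceResponse.throttleTimeMs(), is presumably the throttle deadline that completeBatch passed along so the partition stayed muted until the broker-reported throttle expired; that plumbing is what the review thread below discusses removing.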
Contributor:

Is the idea that we can remove the throttling here, and rely on the existing NetworkClient throttle instead?

@ijuma (Author) replied, Aug 18, 2019:

Yes. This logic was only active for max.inflight=1 which is confusing and seemingly unnecessary. We have logic to remove nodes from readyNodes if the client is not ready.
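
For reference, the removed per-partition bookkeeping followed roughly this shape (an illustrative from-memory sketch, not the verbatim pre-PR RecordAccumulator code; MutedPartitions and its String keys are simplifications):

import java.util.HashMap;
import java.util.Map;

// Sketch: partitions are muted while a batch is in flight and, pre-PR,
// stayed muted until a broker-reported throttle deadline had passed.
class MutedPartitions {
    // partition -> time until which it stays muted
    private final Map<String, Long> muted = new HashMap<>();

    void mute(String partition) {
        muted.put(partition, Long.MAX_VALUE); // muted while a batch is in flight
    }

    void unmute(String partition, long throttleUntilMs) {
        muted.put(partition, throttleUntilMs); // pre-PR: keep muted until throttle expires
    }

    boolean isMuted(String partition, long nowMs) {
        Long until = muted.get(partition);
        if (until == null)
            return false;
        if (until > nowMs)
            return true;
        muted.remove(partition); // deadline passed, partition is fully unmuted
        return false;
    }
}

Since a throttled node is already excluded from readyNodes, this map only changed behavior when max.in.flight=1, which is why it could be dropped.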

Contributor:

Sounds good. It does look pretty unnecessary to me too.

@ijuma (Author) commented Aug 18, 2019

Will fix the checkstyle error.

@ijuma changed the title from "MINOR: RecordAccumulator.unmute should add to map if throttling happened" to "MINOR: Remove throttling logic from RecordAccumulator" on Aug 18, 2019
@ijuma requested a review from @junrao, August 18, 2019 01:37
@ijuma (Author) commented Aug 18, 2019

@junrao Does this look good to you?

@ijuma (Author) commented Aug 18, 2019

Retest this please. One build passed, two timed out.

@ijuma force-pushed the record-accumulator-unmute-simplification branch from ad079de to f9ae778 on January 29, 2020 08:57
@ijuma (Author) commented Jan 29, 2020

One job passed and the other failed with two known flakes:

  • kafka.admin.ResetConsumerGroupOffsetTest.testResetOffsetsExportImportPlan
  • org.apache.kafka.connect.mirror.MirrorConnectorsIntegrationTest.testReplication

@rajinisivaram does this look good to you?

@rajinisivaram (Contributor) left a comment:

@ijuma Thanks for the PR, LGTM

@ijuma merged commit cdd5ef3 into apache:trunk on Mar 9, 2020
@ijuma deleted the record-accumulator-unmute-simplification branch on March 9, 2020 18:05