
Make #16284 backward compatible #16334

Merged: 6 commits into apache:master on Apr 29, 2024

Conversation

adithyachakilam (Contributor) commented:

In #16284, we changed how the Kinesis autoscaler computes lag for the purposes of autoscaling. This PR puts that behavior behind a flag so as not to surprise users who have already configured their autoscaler thresholds based on the total lag.
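To make the compatibility concern concrete, here is a small hypothetical illustration. The numbers and the choice of a per-partition maximum as the alternative aggregate are assumptions for illustration, not taken from #16284: the point is only that a scale-out threshold tuned against total lag can silently stop firing once the autoscaler compares it against a different aggregate.

```java
// Illustrative only, not Druid code: shows why a threshold tuned for total lag
// can misbehave if the autoscaler switches to a different lag aggregate.
public class ThresholdExample
{
  public static void main(String[] args)
  {
    long[] partitionLag = {40_000L, 35_000L, 25_000L};  // made-up per-partition lag
    long scaleOutThreshold = 90_000L;                   // tuned assuming total lag

    long totalLag = 0L;
    long maxLag = 0L;
    for (long lag : partitionLag) {
      totalLag += lag;
      maxLag = Math.max(maxLag, lag);
    }

    // Total lag: 100_000 > 90_000, so the old behavior scales out.
    System.out.println("scale out on total lag? " + (totalLag > scaleOutThreshold));
    // Per-partition max: 40_000 > 90_000 is false, so the same threshold never fires.
    System.out.println("scale out on max lag?   " + (maxLag > scaleOutThreshold));
  }
}
```

Hence the flag: existing deployments keep the aggregate their thresholds were tuned for unless they opt in to the new behavior.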

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

long lag = lagBasedAutoScalerConfig.getLagStatsType() != null
           ? lagStats.getMetric(lagBasedAutoScalerConfig.getLagStatsType())
           : lagStats.getPrefferedScalingMetric();
lagMetricsQueue.offer(lag > 0 ? lag : 0L);

Check notice (Code scanning / CodeQL): Ignored error status of call. Method run ignores the exceptional return value of CircularFifoQueue.offer.
  lagMetricsQueue.offer(lag > 0 ? lag : 0L);
} else {
  lagMetricsQueue.offer(0L);

Check notice (Code scanning / CodeQL): Ignored error status of call. Method run ignores the exceptional return value of CircularFifoQueue.offer.
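Regarding the two CodeQL notices above: assuming lagMetricsQueue is Apache Commons Collections' CircularFifoQueue, ignoring the return value of offer is benign, because that implementation always returns true and simply evicts its oldest element when it is at capacity. A minimal standalone sketch of that behavior (the queue size and lag values here are made up):

```java
import org.apache.commons.collections4.queue.CircularFifoQueue;

public class OfferBehaviour
{
  public static void main(String[] args)
  {
    // A bounded queue that silently evicts its oldest element once full.
    CircularFifoQueue<Long> lagMetricsQueue = new CircularFifoQueue<>(3);

    for (long lag : new long[]{10L, 20L, 30L, 40L}) {
      // CircularFifoQueue.offer always returns true; when the queue is already
      // at capacity, the least recently added element is discarded first.
      boolean accepted = lagMetricsQueue.offer(lag);
      System.out.println("offered " + lag + ", accepted=" + accepted);
    }

    // Prints [20, 30, 40]: 10 was evicted, nothing was ever rejected.
    System.out.println(lagMetricsQueue);
  }
}
```

So for this particular queue type the suppressed status value carries no information, and the notice can reasonably be left as-is.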
kfaraz (Contributor) left a comment:

@adithyachakilam, thanks for the changes! The implementation makes sense. I have just left some suggestions for field names and code structure.

kfaraz (Contributor) left a comment:

Final comments, will approve once these are addressed.

kfaraz (Contributor) left a comment:


Thanks for addressing the comments, @adithyachakilam.

kfaraz (Contributor) commented on Apr 29, 2024:

Merging this PR as the pending tests are stuck due to the issue described in #16347. The same tests have already passed on a different JDK.

kfaraz merged commit f8015eb into apache:master on Apr 29, 2024 (85 of 87 checks passed).
suneet-s (Contributor) commented:

@adithyachakilam Can you please backport this change to the 30.0 branch, as it looks like #16284 is in that branch?

adithyachakilam added a commit to adithyachakilam/druid that referenced this pull request on May 2, 2024:
Changes:
- Add new config `lagAggregate` to `LagBasedAutoScalerConfig`
- Add field `aggregateForScaling` to `LagStats`
- Use the new field/config to determine which aggregate to use to compute lag
- Remove method `Supervisor.computeLagForAutoScaler()`
kfaraz pushed a commit that referenced this pull request on May 3, 2024:
Changes:
- Add new config `lagAggregate` to `LagBasedAutoScalerConfig`
- Add field `aggregateForScaling` to `LagStats`
- Use the new field/config to determine which aggregate to use to compute lag
- Remove method `Supervisor.computeLagForAutoScaler()`
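For readers skimming the change list above, here is a rough, self-contained sketch of the resulting selection logic. Only the names `lagAggregate` and `aggregateForScaling` come from the commit message; every other type, field, and value below is an assumption for illustration, not the actual Druid code.

```java
// Hypothetical sketch of the behavior described in the change list above.
public class LagAggregateSketch
{
  enum AggregateFunction { MAX, SUM, AVERAGE }

  static class LagStats
  {
    final long maxLag;
    final long totalLag;
    final long avgLag;
    // Aggregate the supervisor prefers when the autoscaler config does not name one.
    final AggregateFunction aggregateForScaling;

    LagStats(long maxLag, long totalLag, long avgLag, AggregateFunction aggregateForScaling)
    {
      this.maxLag = maxLag;
      this.totalLag = totalLag;
      this.avgLag = avgLag;
      this.aggregateForScaling = aggregateForScaling;
    }

    long getMetric(AggregateFunction aggregate)
    {
      switch (aggregate) {
        case MAX:
          return maxLag;
        case SUM:
          return totalLag;
        default:
          return avgLag;
      }
    }
  }

  // lagAggregate stands in for the (nullable) value a user may set in LagBasedAutoScalerConfig.
  static long computeLagForScaling(AggregateFunction lagAggregate, LagStats stats)
  {
    AggregateFunction aggregate = lagAggregate != null ? lagAggregate : stats.aggregateForScaling;
    return stats.getMetric(aggregate);
  }

  public static void main(String[] args)
  {
    LagStats stats = new LagStats(500L, 1200L, 400L, AggregateFunction.SUM);
    // No explicit lagAggregate: fall back to the supervisor's preferred aggregate (SUM -> 1200).
    System.out.println(computeLagForScaling(null, stats));
    // Explicit lagAggregate overrides the preference (MAX -> 500).
    System.out.println(computeLagForScaling(AggregateFunction.MAX, stats));
  }
}
```

The idea is that an explicitly configured `lagAggregate` always wins, and only when it is absent does the supervisor's own preference decide, which is what preserves backward compatibility for existing autoscaler configs.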
kfaraz added this to the 31.0.0 milestone on Oct 4, 2024.