Kafka Scaler scaleToZeroOnInvalidOffset flag is only working for 'latest' offsetresetpolicy #4910
Comments
I think I'm also having this issue. I use Azure Container Apps, which uses KEDA version 2.10.0. No events were produced to the topic for a week, after which the number of replicas went up to the maximum (3 in my case); there are 6 partitions. After setting offsetResetPolicy to latest, the replicas scaled down to 0. That setting works fine for a while (maybe a couple of days), after which the replica count stays at zero and never scales back up.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
hi,
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
@dttung2905 wdyt? To me it sounds reasonable; are there any problems from the Kafka side?
Hi @jeevanragula,
If @zroubalik and @JorTurFer agree, I can open a PR to fix this issue this week.
Report
We have configured offsetResetPolicy as "earliest" in our Kafka ScaledObject, and scaleToZeroOnInvalidOffset as "true".
The pods are not scaled to zero when the consumer offset comes back as -1.
Per the code in the getLagForPartition function, this property is applied only when offsetResetPolicy == latest.
Is that the expected behavior?
If offsetResetPolicy is earliest and we get an invalid offset for one partition, that partition's -1 offset is still deducted when that lag is added into the total lag in the subsequent for loop.
Can't scaleToZeroOnInvalidOffset also take effect as "true" in this case?
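The asymmetry described above can be sketched roughly as follows (a hypothetical simplification in Go, not KEDA's actual source; the function signature and parameter names are illustrative only):

```go
package main

import "fmt"

const invalidOffset = -1

// lagForPartition is an illustrative sketch of the reported behavior:
// scaleToZeroOnInvalidOffset is consulted only when offsetResetPolicy
// is "latest"; for "earliest" the -1 flows straight into the lag math.
func lagForPartition(consumerOffset, latestOffset int64,
	offsetResetPolicy string, scaleToZeroOnInvalidOffset bool) int64 {
	if consumerOffset == invalidOffset && offsetResetPolicy == "latest" {
		if scaleToZeroOnInvalidOffset {
			return 0 // flag honored: treat the partition as caught up
		}
		return latestOffset // scale as if nothing was consumed
	}
	// "earliest" path: the flag is ignored and the -1 is used as the
	// consumer offset, inflating the lag instead of zeroing it.
	return latestOffset - consumerOffset
}

func main() {
	// With "latest" the flag zeroes the lag...
	fmt.Println(lagForPartition(invalidOffset, 100, "latest", true)) // 0
	// ...but with "earliest" the same flag has no effect.
	fmt.Println(lagForPartition(invalidOffset, 100, "earliest", true)) // 101
}
```

If this matches the real implementation, the fix would presumably be to consult the flag before (or independently of) the offsetResetPolicy branch.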
Expected Behavior
I think the behavior either needs to be documented clearly or the logic needs to change.
Actual Behavior
The property only takes effect when offsetResetPolicy is "latest".
Steps to Reproduce the Problem
Configure Kafka scaler
Make sure there are no messages in the topic for the retention period of 7 days (Kafka's default)
Configure the properties below
offsetResetPolicy: earliest
scaleToZeroOnInvalidOffset: "true"
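For reference, a minimal trigger configuration reproducing this might look like the following (the server, topic, consumer group, and target names are placeholders; only the last two metadata fields are the ones under discussion):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: kafka-scaledobject
spec:
  scaleTargetRef:
    name: my-consumer              # placeholder deployment name
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka:9092   # placeholder
        consumerGroup: my-group        # placeholder
        topic: my-topic                # placeholder
        offsetResetPolicy: earliest
        scaleToZeroOnInvalidOffset: "true"
```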
Logs from KEDA operator
No response
KEDA Version
2.11.2
Kubernetes Version
1.24
Platform
Any
Scaler Details
Kafka
Anything else?
No response