KAFKA-17792: header parsing times out and uses large quantities of memory if the string looks like a number #17510

Open · msillence wants to merge 2 commits into trunk

Conversation

@msillence

Jira KAFKA-17792

We have trace headers such as:

"X-B3-SpanId": "74320e6e26adc8f8"

if however the value happens to be: "407127e212797209"

This value is then treated as numeric: the parser tries to convert it to an exact numeric representation using BigDecimal.
Specifically, the calls to setScale with FLOOR and CEILING rounding seem to cause the problem.
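To illustrate (a minimal sketch, not code from the PR): the BigDecimal constructor itself is cheap because it stores the value as an unscaled value plus a scale, but setScale(0, ...) has to materialize every digit of the expanded number.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class HeaderParseCost {
    public static void main(String[] args) {
        // Parses instantly: stored as unscaled value 407127 with scale -212797209.
        BigDecimal d = new BigDecimal("407127e212797209");
        // Expands to an integer with over 200 million digits; this is the
        // step that burns minutes of CPU time and gigabytes of heap.
        BigDecimal floored = d.setScale(0, RoundingMode.FLOOR);
        System.out.println(floored.precision());
    }
}
```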

Testing the behaviour of BigInteger and the setScale calls shows there are already limits on the possible number range; sadly, my particular number falls in a range that takes about 25 minutes and 4 GB of memory rather than failing quickly:

1e+1 time 0.0 totalMemory 532676608
1e+10 time 0.0 totalMemory 532676608
1e+100 time 0.0 totalMemory 532676608
1e+1000 time 0.0 totalMemory 532676608
1e+10000 time 0.005 totalMemory 532676608
1e+100000 time 0.035 totalMemory 532676608
1e+1000000 time 0.228 totalMemory 532676608
1e+10000000 time 4.308 totalMemory 926941184
1e+100000000 time 117.119 totalMemory 3221225472
1e+1000000000 time 0.0 totalMemory 3221225472 BigInteger would overflow supported range
1e+10000000000 time 0.001 totalMemory 3221225472 Too many nonzero exponent digits.
1e+100000000000 time 0.0 totalMemory 3221225472 Too many nonzero exponent digits.
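For reference, a harness of roughly this shape reproduces output like the above (class and variable names here are illustrative, not from the PR):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class SetScaleBenchmark {
    public static void main(String[] args) {
        String[] inputs = {"1e+1", "1e+10", "1e+100", "1e+1000", "1e+10000",
                "1e+100000", "1e+1000000", "1e+10000000", "1e+100000000",
                "1e+1000000000", "1e+10000000000", "1e+100000000000"};
        for (String input : inputs) {
            long start = System.nanoTime();
            String note = "";
            try {
                new BigDecimal(input).setScale(0, RoundingMode.FLOOR);
            } catch (ArithmeticException | NumberFormatException e) {
                // e.g. "BigInteger would overflow supported range" (setScale)
                // or "Too many nonzero exponent digits." (constructor)
                note = " " + e.getMessage();
            }
            double seconds = (System.nanoTime() - start) / 1e9;
            System.out.println(input + " time " + seconds
                    + " totalMemory " + Runtime.getRuntime().totalMemory() + note);
        }
    }
}
```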

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@github-actions github-actions bot added streams core Kafka Broker tools connect build Gradle build or GitHub Actions clients labels Oct 15, 2024
@mimaison (Member)

Thanks for the PR. Please make sure it only contains changes related to the issue this is addressing. Currently I see a lot of other changes including some non-Apache Kafka files.

@msillence (Author)

> Thanks for the PR. Please make sure it only contains changes related to the issue this is addressing. Currently I see a lot of other changes including some non-Apache Kafka files.

I'm not quite sure what I've done wrong. I cloned the repo, applied my change again, and force-pushed it to my fork, and it still looks wrong.

The change is here msillence@3720951

@msillence (Author)

Oh, now I see the problem: I've mixed up repos. This change is against github.com:confluentinc/kafka.

@github-actions github-actions bot added the small Small PRs label Oct 16, 2024
@msillence (Author)

Actually, there is a simpler solution: https://stackoverflow.com/a/12748321

@msillence (Author)

I feel like the correct fix is the Stack Overflow test:
bd.signum() == 0 || bd.scale() <= 0 || bd.stripTrailingZeros().scale() <= 0
Sadly there are quite a few tests that expect the perhaps-incorrect behaviour: Float.MAX_VALUE, which is 3.4028235E38, looks like an integer to me, but the ceil/floor calls failed due to the number of zeros, so it fell through to another part of the parsing and returned a float.
To further confound the issue, very small numbers such as 1e-100000000 will, with the new integer test, fall through to the float conversion, which only checks the upper range and simply converts them to zero since float doesn't have the precision. Fixing that breaks yet more tests.
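For reference, here is the Stack Overflow test wrapped up as a runnable sketch (not the code in this PR):

```java
import java.math.BigDecimal;

public class IntegerCheck {

    // True iff bd holds an integral value: zero, no fractional scale, or no
    // fractional digits left after stripping trailing zeros. None of these
    // calls expands the number to its full digit string, so this stays cheap
    // even for exponents like 1e+100000000.
    static boolean isIntegerValue(BigDecimal bd) {
        return bd.signum() == 0
                || bd.scale() <= 0
                || bd.stripTrailingZeros().scale() <= 0;
    }

    public static void main(String[] args) {
        System.out.println(isIntegerValue(new BigDecimal("407127e212797209"))); // true
        System.out.println(isIntegerValue(new BigDecimal("3.14")));             // false
        System.out.println(isIntegerValue(new BigDecimal("1e-100000000")));     // false
    }
}
```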

@mimaison mimaison requested a review from gharris1727 October 22, 2024 13:01
@gharris1727 (Contributor) left a comment

@msillence Thanks for the PR!

I did not know that setScale with very large numbers was so costly. I see that the old logic (before #15469) used RoundingMode.ROUND_UNNECESSARY, which probably had some guards in place to avoid consuming resources for an impossible operation.

> Sadly there are quite a few tests that expect the perhaps-incorrect behaviour: Float.MAX_VALUE, which is 3.4028235E38, looks like an integer to me, but the ceil/floor calls failed due to the number of zeros, so it fell through to another part of the parsing and returned a float.

That number is indeed an integer, but it's too large to fit in a LONG64, which has ~19 decimals of precision. The tests are correct that this should be parsed as a FLOAT32.

While I understand that having mostly STRINGs with an occasional number-look-alike makes it natural to suggest changing this into a STRING, I don't think that's backwards compatible or safe. Another user could currently have a flow which parses all Decimals, and this implementation as it stands would have some of their decimals fall back to STRING.

To avoid accidental Decimal parsing, I would recommend either an alternative HeaderConverter implementation, or avoiding using IDs that can be confused for numbers, such as with an alphabetic prefix. Even with this patch, you will still get some accidental Decimal, Float, or Double parsing when the e lands in some part of the value.
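As a concrete example of the first suggestion (a sketch of a workaround, not part of this PR): Connect's StringConverter also implements HeaderConverter, so a worker can opt out of header value inference entirely:

```properties
# Connect worker config: replace the default SimpleHeaderConverter, which
# infers types from header values, with StringConverter so headers stay strings.
header.converter=org.apache.kafka.connect.storage.StringConverter
```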

@@ -1041,6 +1043,10 @@ private static SchemaAndValue parseAsNumber(String token) {
     }

     private static SchemaAndValue parseAsExactDecimal(BigDecimal decimal) {
+        BigDecimal abs = decimal.abs();
+        if (abs.compareTo(TOO_BIG) > 0 || (abs.compareTo(TOO_SMALL) < 0 && BigDecimal.ZERO.compareTo(abs) != 0)) {
+            throw new NumberFormatException("outside efficient parsing range");
@gharris1727 (Contributor) commented on the diff above:

At this stage in parsing, we've successfully parsed a BigDecimal. This method tries to losslessly cast the BigDecimal to a primitive integer type. If the cast would be lossy, it returns null and lets the caller figure out whether it's a float32, float64, or has to be an arbitrary precision decimal.

Rather than throwing NumberFormatException and making this a STRING, we should return null, and let the other fallback logic happen, so this shows up as an arbitrary precision Decimal.

I also suspect that the comparison could be a little tighter in that case: TOO_BIG could be LONG_MAX, and TOO_SMALL could be ONE, eliminating the call to setScale for values like 1e20 that cannot possibly fit in a LONG.
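A sketch of that tightened guard (the constant names follow the diff above; returning a plain Long instead of SchemaAndValue is a simplification, and Long.MIN_VALUE, whose magnitude exceeds Long.MAX_VALUE, would be incorrectly rejected by this simplified bound):

```java
import java.math.BigDecimal;

public class ExactLongGuard {
    private static final BigDecimal TOO_BIG = BigDecimal.valueOf(Long.MAX_VALUE);
    private static final BigDecimal TOO_SMALL = BigDecimal.ONE;

    // Returns the exact long value, or null so the caller can fall back to
    // float32/float64/arbitrary-precision Decimal -- without ever asking
    // setScale to expand a value that cannot fit in a long anyway.
    static Long parseAsExactLong(BigDecimal decimal) {
        BigDecimal abs = decimal.abs();
        if (abs.compareTo(TOO_BIG) > 0
                || (abs.signum() != 0 && abs.compareTo(TOO_SMALL) < 0)) {
            return null; // e.g. 1e20 or 1e-100000000: cannot be an exact long
        }
        try {
            return decimal.longValueExact(); // cheap once the range is bounded
        } catch (ArithmeticException e) {
            return null; // in range but not integral, e.g. 3.5
        }
    }
}
```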

@mjsax mjsax removed streams core Kafka Broker tools build Gradle build or GitHub Actions clients labels Nov 2, 2024