CVE-2020-10735: Prevent DoS by large int<->str conversions #95778
Integer to and from text conversions via CPython's bignum `int` type are not safe against denial-of-service attacks due to malicious input. Very large input strings with hundreds of thousands of digits can consume several CPU seconds.

This PR comes fresh from a pile of work done in our private PSRT security response team repo.

Signed-off-by: Christian Heimes [Red Hat] <[email protected]>
Tons-of-polishing-up-by: Gregory P. Smith [Google] <[email protected]>
Reviews via the private PSRT repo by many others (see the NEWS entry in the PR).

* Issue: gh-95778

I wrote up [a one pager for the release managers](https://docs.google.com/document/d/1KjuF_aXlzPUxTK4BMgezGJ2Pn7uevfX7g0_mvgHlL7Y/edit#). Much of that text wound up in the issue. Backport PRs already exist; see the issue for links.
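For context, this is roughly what the mitigation looks like from Python code (my illustration, not part of the PR text; the 4300-digit default comes from the NEWS entry later in this thread, and the exact error wording may differ by version):

```python
>>> s = "1" * 5000    # 5000 decimal digits, above the default 4300-digit limit
>>> int(s)
Traceback (most recent call last):
  ...
ValueError: Exceeds the limit (4300 digits) for integer string conversion
>>> str(10 ** 5000)   # the int-to-str direction is limited the same way
Traceback (most recent call last):
  ...
ValueError: Exceeds the limit (4300 digits) for integer string conversion
```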
This backports #96499 aka 511ca94 (same description as above).

Signed-off-by: Christian Heimes [Red Hat] <[email protected]>
Tons-of-polishing-up-by: Gregory P. Smith [Google] <[email protected]>

* Issue: gh-95778
Thanks a lot for doing this, and especially for the backports to older versions. It's really nice I don't have to figure them all out myself ;-).
As commented on the PR, the current code in main doesn't prevent the potential DoS for int-to-str conversion:

```python
Python 3.12.0a0 (heads/main-dirty:ac4ddab405, Sep 3 2022, 16:35:07) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> n = 10**(10**7)  # takes a few seconds to run
>>> s = str(n)       # expect instant ValueError, but takes many minutes
```

One possible fix would be to keep the existing post-loop check (so that we get the benefit of knowing exactly what the threshold is), but also add a cruder pre-check that would catch the majority of bad cases before the loop is entered. I think we want something like the following as a pre-check:

```c
if (size_a >= 10 * _PY_LONG_MAX_STR_DIGITS_THRESHOLD / (3 * PyLong_SHIFT) + 2) {
    PyInterpreterState *interp = _PyInterpreterState_GET();
    int max_str_digits = interp->int_max_str_digits;
    if ((max_str_digits > 0) && (size_a >= 10 * max_str_digits / (3 * PyLong_SHIFT) + 2)) {
        /* do error raising here */
    }
}
```

Here the outer test uses the compile-time constant `_PY_LONG_MAX_STR_DIGITS_THRESHOLD` (the minimum allowed non-zero limit), so the interpreter state only needs to be consulted for genuinely huge inputs. For the logic used in computing the constant: 10/3 is an upper bound for log2(10), so any integer occupying at least `10 * max_str_digits / (3 * PyLong_SHIFT) + 2` internal digits must have more than `max_str_digits` decimal digits.

One complication: we can no longer produce an error message that says exactly how many decimal digits there were, but I think that's going to be true no matter how we fix this.
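To make the arithmetic concrete, here is a worked example of my own (the `PyLong_SHIFT == 30` value and the 4300-digit default limit are assumptions about a typical 64-bit build, not values stated in the comment above):

```python
import math

PyLong_SHIFT = 30        # bits per internal digit on typical 64-bit CPython builds (assumed)
max_str_digits = 4300    # the default limit proposed in this issue (assumed here)

# Smallest internal-digit count at which the pre-check fires (C integer division).
threshold = 10 * max_str_digits // (3 * PyLong_SHIFT) + 2
print(threshold)  # -> 479

# Any int occupying that many internal digits is at least 2**(PyLong_SHIFT * (threshold - 1)),
# which already has more decimal digits than the limit allows.
min_decimal_digits = math.floor(PyLong_SHIFT * (threshold - 1) * math.log10(2)) + 1
print(min_decimal_digits)  # -> 4317, comfortably above 4300
```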
I'll see if I can turn the above into a PR.
BTW, did the PR authors consider using type […]?
`-X` options and envvars are documented in 4 or 5 spots; I think a couple were missed by the PR.
I did consider […]
* Correctly pre-check for int-to-str conversion (#96537)

Converting a large enough `int` to a decimal string raises `ValueError` as expected. However, the raise comes _after_ the quadratic-time base-conversion algorithm has run to completion. For effective DoS prevention, we need some kind of check before entering the quadratic-time loop. Oops! =)

The quick fix: essentially we catch _most_ values that exceed the threshold up front. Those that slip through will still be on the small side (read: sufficiently fast), and will get caught by the existing check so that the limit remains exact.

The justification for the current check. The C code check is:

```c
max_str_digits / (3 * PyLong_SHIFT) <= (size_a - 11) / 10
```

In GitHub markdown math-speak, writing $M$ for `max_str_digits`, $L$ for `PyLong_SHIFT` and $s$ for `size_a`, that check is:

$$\left\lfloor\frac{M}{3L}\right\rfloor \le \left\lfloor\frac{s - 11}{10}\right\rfloor$$

From this it follows that

$$\frac{M}{3L} < \frac{s-1}{10},$$

hence that

$$\frac{L(s-1)}{M} > \frac{10}{3} > \log_2(10).$$

So

$$2^{L(s-1)} > 10^M.$$

But our input integer $a$ satisfies $|a| \ge 2^{L(s-1)}$, so $|a|$ is larger than $10^M$. This shows that we don't accidentally capture anything _below_ the intended limit in the check.

* Issue: gh-95778

Co-authored-by: Gregory P. Smith [Google LLC] <[email protected]>
Co-authored-by: Christian Heimes <[email protected]>
Co-authored-by: Mark Dickinson <[email protected]>
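The inequality chain above can also be sanity-checked by brute force over small parameters. This is my own verification sketch, not part of the commit; Python's `//` is floor division, matching the floor brackets in the proof:

```python
# Whenever the floored check fires, the smallest integer with s internal digits
# must already exceed 10**M, so nothing below the limit is ever rejected early.
for L in (15, 30):                 # PyLong_SHIFT values used by CPython builds
    for M in range(1, 200):        # max_str_digits
        for s in range(1, 200):    # size_a, the number of internal digits
            if M // (3 * L) <= (s - 11) // 10:
                assert 2 ** (L * (s - 1)) > 10 ** M
print("pre-check never fires below the limit for these parameters")
```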
Are you aware that there are algorithms to do conversions faster than quadratic time? For example, one may use divide and conquer + FFT to achieve something around O(n log^2 n) complexity. Not as great as linear, but certainly much better than quadratic.
Yes, using a better algorithm is covered in #90716.
Removing the limit might not be feasible: after this feature ships, some code could come to depend on its existence, knowingly or not, to indirectly protect other parts of whole systems from large values. That doesn't mean we couldn't, just that it'd need consideration.

We also need to consider the worst case performance of any fancier algorithm before removing the ability to limit it. Untrusted data seeking a DoS would target that. Ex: if a faster conversion algorithm is typically fast but has a much slower worst case, that worst case is exactly what crafted input would aim for.

A notebook could even be designed to avoid half the problem by having its REPL repr and storage code auto-switch to hex for large values.
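A minimal sketch of that last idea (mine, not code from the thread): a custom `sys.displayhook` that shows very large integers in hex, which is a linear-time conversion; the 10,000-bit cutoff is an arbitrary illustration.

```python
import builtins
import sys

def hex_displayhook(value):
    """Show very large ints in hex (a linear-time conversion) instead of decimal."""
    if isinstance(value, int) and not isinstance(value, bool) and value.bit_length() > 10_000:
        print(hex(value))            # power-of-two bases avoid the quadratic conversion
        builtins._ = value           # keep the usual REPL "_" convention
    else:
        sys.__displayhook__(value)   # default behaviour for everything else

sys.displayhook = hex_displayhook
```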
This is appalling.

Situation: people pass untrusted data to slow built-in functions. Reasonable solution: a) patch vulnerable programs, b) optimize said functions. CPython's solution: just limit stuff. Your rationale is founded on the fact that developers constantly add bugs and fixing them is not an option. Fixing bugs is, like, more than half of what developers normally do.

When ReDoS was discovered, did we limit the number of iterations of the matching algorithm? No. Then why should we do the same here? There are dangerous tools, but they are often much more useful due to the lack of artificial limits. When did we drop "Special cases aren't special enough to break the rules"?

To add insult to injury, people seem to be talking about limiting […]. You're limiting integer-to-string conversions. How soon are you going to limit […]? Why is […]?

And then, has anyone actually researched how much trusted code parses integers from untrusted sources directly? To me it seems that even ReDoS should be more popular than this. I think people seldom call […].

I don't think I even understand why a limit is added in the first place. If Python used a quadratic algorithm for multiplication instead of FFT, that would not be considered a vulnerability but an inefficiency. There are much faster algorithms than the one used by CPython; I literally found one by googling "subquadratic base conversion". There: Richard P. Brent and Paul Zimmermann, Modern Computer Arithmetic. #90716, the issue that was tracking this, was dropped because, what, developers tried to invent stuff themselves instead of comparing with the state of the art, and just chose the simplest option after an inevitable failure?
As a reminder to everybody, the Python Community Code of Conduct applies here.
Closing. This is fixed. We'll open new issues for any follow-up work necessary.
If applications don't validate their data before running str <-> int conversions, I'm not entirely sure that having an exploit that allows attackers to crash the application with an unexpected ValueError, instead of merely slowing it down for a few seconds, is a great improvement. I'm only a naive developer, but this kinda feels like the fix is worse than the problem it's solving. What am I missing?
Converting between `int` and `str` in bases other than 2 (binary), 4, 8 (octal), 16 (hexadecimal), or 32, such as base 10 (decimal), now raises a `ValueError` if the number of digits in string form is above a limit, to avoid potential denial of service attacks due to the algorithmic complexity. This is a mitigation for CVE-2020-10735 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10735).

This new limit can be configured or disabled by environment variable, command line flag, or :mod:`sys` APIs. See the `Integer String Conversion Length Limitation` documentation. The default limit is 4300 digits in string form.

Patch by Gregory P. Smith [Google] and Christian Heimes [Red Hat] with feedback from Victor Stinner, Thomas Wouters, Steve Dower, Ned Deily, and Mark Dickinson.
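For reference, the configuration knobs that NEWS entry mentions can be exercised like this (a sketch against the documented `sys` APIs, command line flag, and environment variable):

```python
import sys

print(sys.get_int_max_str_digits())   # 4300 by default

# Raise the limit for a workload that legitimately needs huge decimal conversions...
sys.set_int_max_str_digits(100_000)

# ...or disable it entirely (0 means no limit).
sys.set_int_max_str_digits(0)

# The same limit can be set before the interpreter starts:
#   PYTHONINTMAXSTRDIGITS=0 python3
#   python3 -X int_max_str_digits=100000
```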
Please redirect further discussion to discuss.python.org.
Further discussion is taking place in discuss.python.org threads.
There is a minor bug when parsing the `-X` option: #96848
I propose PR #96874 to mention sys.set_int_max_str_digits() in the error message.
Follow-up PR for a couple of spots missed in the documentation: #100627
Problem
A Denial of Service (DoS) issue was identified in CPython because we use binary bignums for our `int` implementation. A huge integer will always consume a near-quadratic amount of CPU time in conversion to or from a base 10 (decimal) string with a large number of digits. No efficient algorithm exists to do otherwise.

It is quite common for Python code implementing network protocols and data serialization to do `int(untrusted_string_or_bytes_value)` on input to get a numeric value without having limited the input length, or to do `log("processing thing id %s", unknowingly_huge_integer)` or any similar construct that converts an `int` to a string without first checking its magnitude (`http`, `json`, `xmlrpc`, `logging`, loading large values into integers via linear-time conversions such as hexadecimal stored in `yaml`, or anything computing larger values based on user controlled inputs… which then wind up being output as decimal later on). All of these can suffer a CPU-consuming DoS in the face of untrusted data.

Everyone auditing all existing code for this, adding length guards, and maintaining that practice everywhere is not feasible, nor is it what we deem the vast majority of our users want to do.
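For illustration, this is the sort of guard every such call site would need if the burden stayed on applications (my sketch; the 4300 cutoff simply mirrors the default limit chosen below and is not prescribed anywhere):

```python
MAX_DIGITS = 4300  # an application-chosen bound on accepted numeric fields

def parse_untrusted_int(text: str) -> int:
    """Reject oversized numeric fields before paying for the quadratic conversion."""
    digits = text.strip().lstrip("+-")
    if len(digits) > MAX_DIGITS:
        raise ValueError(f"numeric field too long ({len(digits)} digits)")
    return int(text)
```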
This issue has been reported to the Python Security Response Team multiple times by a few different people since early 2020, most recently a few weeks ago while I was in the middle of polishing up the PR so it’d be ready before 3.11.0rc2.
Mitigation
After discussion on the Python Security Response Team mailing list, the conclusion was that we needed to limit the size of integer-to-string conversions for non-linear-time conversions (anything not a power-of-2 base) by default, and to offer the ability to configure or disable this limit.
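To make the scope concrete (my sketch, run with the default limit in effect): only the non-power-of-two bases are limited, because binary, octal, and hexadecimal conversions are linear-time.

```python
n = 1 << 100_000              # roughly 30,000 decimal digits, built without any base conversion

print(len(hex(n)))            # fine: base 16 is linear-time and never limited
print(len(format(n, "b")))    # fine: base 2 is linear-time and never limited

try:
    str(n)                    # base 10 is quadratic-time, so it is subject to the limit
except ValueError as exc:
    print(exc)
```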
The Python Steering Council is aware of this change and accepts it as necessary.
Linked PRs