Integer overflow/UB when writing large buffers to TLS streams #38049

Open
zbjornson opened this issue Apr 2, 2021 · 1 comment
Labels
tls Issues and PRs related to the tls subsystem.

Comments

zbjornson commented Apr 2, 2021

What steps will reproduce the bug?

const https = require("https");

const req = https.request({
  hostname: "httpbin.org",
  port: 443,
  path: "/post",
  method: "POST"
});
req.on("error", console.error);
req.on("response", r => console.log("response", r.statusCode));
req.write(Buffer.alloc(2147483616));
req.end();

How often does it reproduce? Is there a required condition?

Consistently.

What is the expected behavior?

Either a nice error saying that stream chunks can only be up to INT_MAX, or for the stream to automatically slice up large chunks.

Edit: the limit isn't actually INT_MAX. In the above test case it's around 2,147,483,540 bytes, which I'm guessing is the body plus roughly 107 bytes of overhead that together sum to INT_MAX.
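The overhead guess above is just arithmetic on the observed threshold; a quick sketch (the 107-byte figure is my observation from this one test case, not a documented constant):

```javascript
// Back-of-the-envelope check of the observed write limit.
const INT_MAX = 2 ** 31 - 1;       // 2,147,483,647 for a 32-bit int
const observedLimit = 2147483540;  // largest body size that succeeded here
const overhead = INT_MAX - observedLimit;
console.log(overhead);             // 107 — presumed framing/header overhead
```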

What do you see instead?

Error: write EPROTO 139931020609408:error:140D010F:SSL routines:SSL_write:bad length:../deps/openssl/openssl/ssl/ssl_lib.c:1962:

    at afterWriteDispatched (internal/stream_base_commons.js:156:25)
    at writevGeneric (internal/stream_base_commons.js:139:3)
    at TLSSocket.Socket._writeGeneric (net.js:783:11)
    at TLSSocket.connect (net.js:767:12)
    at Object.onceWrapper (events.js:421:28)
    at TLSSocket.emit (events.js:327:22)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1132:10) {
  errno: -71,
  code: 'EPROTO',
  syscall: 'write'
}

Additional information

The error claims to be from the write syscall, but it looks to be in openssl upstream of the syscall:

int SSL_write(SSL *s, const void *buf, int num)
{
    int ret;
    size_t written;

    if (num < 0) {
        SSLerr(SSL_F_SSL_WRITE, SSL_R_BAD_LENGTH);

For comparison, an fs.WriteStream will properly write files exceeding Linux's write(2) limit of 2,147,479,552 bytes.

[pid  2112] write(17, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 2147483616) = 2147479552 <1.787290>
[pid  2112] write(17, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 4064) = 4064 <0.000060>

#27861 looks like it would make this possible to hit in other scenarios too; maybe there should be a limit on the concatenated segment size.

zbjornson commented Apr 3, 2021

Here's half of a fix that switches from SSL_write to SSL_write_ex:

diff --git a/src/crypto/crypto_tls.cc b/src/crypto/crypto_tls.cc
index 398509bc5c..0569aa725c 100644
--- a/src/crypto/crypto_tls.cc
+++ b/src/crypto/crypto_tls.cc
@@ -774,20 +774,21 @@ void TLSWrap::ClearOut() {
   MarkPopErrorOnReturn mark_pop_error_on_return;
 
   char out[kClearOutChunkSize];
-  int read;
+  size_t read;
+  int ok;
   for (;;) {
-    read = SSL_read(ssl_.get(), out, sizeof(out));
-    Debug(this, "Read %d bytes of cleartext output", read);
+    ok = SSL_read_ex(ssl_.get(), out, sizeof(out), &read);
+    Debug(this, "Read %zu bytes of cleartext output", ok ? read : 0);
 
-    if (read <= 0)
+    if (!ok)
       break;
 
     char* current = out;
     while (read > 0) {
-      int avail = read;
+      size_t avail = read;
 
       uv_buf_t buf = EmitAlloc(avail);
-      if (static_cast<int>(buf.len) < avail)
+      if (buf.len < avail)
         avail = buf.len;
       memcpy(buf.base, current, avail);
       EmitRead(avail, buf);
@@ -811,14 +812,11 @@ void TLSWrap::ClearOut() {
     EmitRead(UV_EOF);
   }
 
-  // We need to check whether an error occurred or the connection was
-  // shutdown cleanly (SSL_ERROR_ZERO_RETURN) even when read == 0.
-  // See node#1642 and SSL_read(3SSL) for details.
-  if (read <= 0) {
+  if (!ok) {
     HandleScope handle_scope(env()->isolate());
     int err;
 
-    Local<Value> arg = GetSSLError(read, &err, nullptr)
+    Local<Value> arg = GetSSLError(ok, &err, nullptr)
         .FromMaybe(Local<Value>());
 
     // Ignore ZERO_RETURN after EOF, it is basically not a error
@@ -859,12 +857,13 @@ void TLSWrap::ClearIn() {
   MarkPopErrorOnReturn mark_pop_error_on_return;
 
   NodeBIO::FromBIO(enc_out_)->set_allocate_tls_hint(data.size());
-  int written = SSL_write(ssl_.get(), data.data(), data.size());
-  Debug(this, "Writing %zu bytes, written = %d", data.size(), written);
-  CHECK(written == -1 || written == static_cast<int>(data.size()));
+  size_t written;
+  int ok = SSL_write_ex(ssl_.get(), data.data(), data.size(), &written);
+  Debug(this, "Writing %zu bytes, written = %zu", data.size(), ok ? written : 0);
+  CHECK(!ok || written == data.size());
 
   // All written
-  if (written != -1) {
+  if (ok) {
     Debug(this, "Successfully wrote all data to SSL");
     return;
   }
@@ -875,7 +874,7 @@ void TLSWrap::ClearIn() {
 
   int err;
   std::string error_str;
-  MaybeLocal<Value> arg = GetSSLError(written, &err, &error_str);
+  MaybeLocal<Value> arg = GetSSLError(ok, &err, &error_str);
   if (!arg.IsEmpty()) {
     Debug(this, "Got SSL error (%d)", err);
     write_callback_scheduled_ = true;
@@ -1008,7 +1007,8 @@ int TLSWrap::DoWrite(WriteWrap* w,
   AllocatedBuffer data;
   MarkPopErrorOnReturn mark_pop_error_on_return;
 
-  int written = 0;
+  size_t written;
+  int ok;
 
   // It is common for zero length buffers to be written,
   // don't copy data if there there is one buffer with data
@@ -1027,25 +1027,25 @@ int TLSWrap::DoWrite(WriteWrap* w,
     }
 
     NodeBIO::FromBIO(enc_out_)->set_allocate_tls_hint(length);
-    written = SSL_write(ssl_.get(), data.data(), length);
+    ok = SSL_write_ex(ssl_.get(), data.data(), length, &written);
   } else {
     // Only one buffer: try to write directly, only store if it fails
     uv_buf_t* buf = &bufs[nonempty_i];
     NodeBIO::FromBIO(enc_out_)->set_allocate_tls_hint(buf->len);
-    written = SSL_write(ssl_.get(), buf->base, buf->len);
+    ok = SSL_write_ex(ssl_.get(), buf->base, buf->len, &written);
 
-    if (written == -1) {
+    if (!ok) {
       data = AllocatedBuffer::AllocateManaged(env(), length);
       memcpy(data.data(), buf->base, buf->len);
     }
   }
 
-  CHECK(written == -1 || written == static_cast<int>(length));
-  Debug(this, "Writing %zu bytes, written = %d", length, written);
+  CHECK(!ok || written == length);
+  Debug(this, "Writing %zu bytes, written = %zu", length, ok ? written : 0);
 
-  if (written == -1) {
+  if (!ok) {
     int err;
-    MaybeLocal<Value> arg = GetSSLError(written, &err, &error_);
+    MaybeLocal<Value> arg = GetSSLError(ok, &err, &error_);
 
     // If we stopped writing because of an error, it's fatal, discard the data.
     if (!arg.IsEmpty()) {

However, uv_try_write returns an int that is either the number of bytes written (> 0) or an error code (< 0), so the overflow recurs there:

diff --git a/src/stream_wrap.cc b/src/stream_wrap.cc
index 78d20f912b..42de5f8831 100644
--- a/src/stream_wrap.cc
+++ b/src/stream_wrap.cc
@@ -342,6 +342,7 @@ int LibuvStreamWrap::DoTryWrite(uv_buf_t** bufs, size_t* count) {
   uv_buf_t* vbufs = *bufs;
   size_t vcount = *count;
 
+  // FIXME err is an int but the buffers can be larger than INT_MAX.
   err = uv_try_write(stream(), vbufs, vcount);
   if (err == UV_ENOSYS || err == UV_EAGAIN)
     return 0;

Fixes I can think of:

  1. Change the return type of uv_try_write to long int or ssize_t. The unsigned->signed conversion behavior here is implementation-defined: https://github.com/libuv/libuv/blob/285a5ea819035ff777b8b7c6a367f3f5b55d8809/src/unix/stream.c#L1551

    I'm struggling to find info on how big the buffer passed to sendmsg(2) or WSASend for a TCP socket can practically be -- I don't know if there's another limit I'd run into after making this change.

  2. Slice the buffers into ~1GB chunks, maybe in LibuvStreamWrap::DoTryWrite() or in TLSWrap::EncOut() -- sort of a refinement to tls: group chunks into TLS segments #27861.

  3. Make no changes to Node.js and do the slicing in my code instead. (Will do regardless, at least temporarily.)

@Ayase-252 Ayase-252 added the tls Issues and PRs related to the tls subsystem. label Apr 3, 2021
@zbjornson zbjornson changed the title Integer overflow/UB when writing large buffers to SSL streams Integer overflow/UB when writing large buffers to TLS streams Apr 3, 2021
kekrby added a commit to t2linux/nixos-t2-iso that referenced this issue Aug 29, 2022