Describe the bug
When uploading a large-ish file via the AWS S3 node, a memory spike is observed. For a 65MB file, memory usage of 3750MB was observed.
To Reproduce
Steps to reproduce the behavior:
1. Create a workflow with an AWS S3 Upload Node
2. Upload a large-ish file (100MB)
3. Observe memory usage spike during upload
Expected behavior
Ideally, `Object.keys(...)` should not be run on large files, thus removing the root cause of the memory spike (see below).
Environment:
- OS: All
- n8n Version: Latest
- Node.js Version: All
- Database system: SQLite
- Operation mode: own
Additional context
The `Aws.credentials.ts` implementation performs the following check while constructing the signed headers for a particular request:

```ts
if (body && Object.keys(body).length === 0) {
	body = '';
}
```
This was added in #4107 in response to another issue. Unfortunately, when uploading a file using the S3 node, the `body` passed to the `authenticate` method contains the full file to be uploaded. This results in `Object.keys(...)` being called on the entire file buffer.
It appears that `Object.keys` handles this poorly. A Buffer exposes every byte as an enumerable indexed property, so `Object.keys` materializes one key string per byte, building an enormous key array just to check its length. For a 60MB test file, this used about 3750MB of memory, and the cost grows with the file size.
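This is easy to verify in isolation. A minimal sketch in plain Node.js (not n8n code) showing the per-byte key materialization:

```ts
// Each byte of a Buffer is an enumerable indexed property,
// so Object.keys returns one key string per byte.
const buf = Buffer.alloc(16);
const keys = Object.keys(buf);

console.log(keys.length);      // 16
console.log(keys.slice(0, 4)); // [ '0', '1', '2', '3' ]

// For a 60MB file this means ~60 million short strings,
// each with its own heap overhead, hence the multi-GB spike.
```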
Would it be possible to add an extra conditional to check for `body instanceof Buffer` to allow a quick skip over large files (see the sketch below)? Alternatively, we could simply remove this block of code completely, though it's there for a reason, so I'm sure this isn't the way to go.
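For illustration, a minimal sketch of what that guard could look like (untested, just the existing check with the extra conditional added):

```ts
// Skip the empty-object check for Buffer bodies: a Buffer is never
// the empty-object case that #4107 targeted, and enumerating its
// keys is what causes the memory spike.
if (body && !(body instanceof Buffer) && Object.keys(body).length === 0) {
	body = '';
}
```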
Thanks so much for your constant assistance!