AWS S3 - Performance Impact on Large Files #6118

Closed
rhyswilliamsza opened this issue Apr 27, 2023 · 0 comments · Fixed by #6131

@rhyswilliamsza (Contributor)

Describe the bug
When uploading a large-ish file via the AWS S3 node, a memory spike is observed. For a 65MB file, memory usage reached roughly 3750MB.

To Reproduce
Steps to reproduce the behavior:

  1. Create a workflow with an AWS S3 Upload Node
  2. Upload a large-ish file (100MB)
  3. Observe memory usage spike during upload

Expected behavior
Ideally, Object.keys(...) should not be run on large files, thus removing the root cause of the memory spike (see below).

Environment:

  • OS: All
  • n8n Version: Latest
  • Node.js Version: All
  • Database system: SQLite
  • Operation mode: own

Additional context
The Aws.credentials.ts implementation performs the following check while constructing the signed headers for a particular request:

if (body && Object.keys(body).length === 0) {
	body = '';
}

This was added in #4107 as a response to another issue. Unfortunately, when uploading a file using the S3 node, the body sent to the authenticate method includes the full file to be uploaded. This results in Object.keys(...) being called on a Buffer containing the entire file.

It appears that Object.keys handles this poorly. When given a Buffer, it enumerates every byte index as an own enumerable property, so the returned array contains one string key per byte of the file. For a 60MB test file, this used about 3750MB of memory, and the usage grows rapidly with file size.
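
For illustration, a small Node.js sketch (not taken from the issue, just a demonstration of the behaviour) shows how Object.keys materialises one index string per byte of a Buffer:

// Sketch: allocate a 64MB buffer and enumerate its keys the way the credential check does.
const buf = Buffer.alloc(64 * 1024 * 1024);
const keys = Object.keys(buf); // ['0', '1', ..., '67108863'] - one string per byte
console.log(keys.length);      // 67108864 index strings held in memory at once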

Would it be possible to add an extra conditional to check for body instanceof Buffer to allow a quick-skip over large files? Alternatively, we could simply remove this block of code completely - though it's there for a reason so I'm sure this isn't the way to go.
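
A minimal sketch of what such a guard could look like (an assumption for discussion, not the actual patch merged in #6131):

// Sketch: skip the empty-object normalisation when the body is already a Buffer,
// so Object.keys is never called on raw file data.
if (body && !(body instanceof Buffer) && Object.keys(body).length === 0) {
	body = '';
}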

Thanks so much for your constant assistance!
