Methods that perform a fetch should throw any socket errors directly (ECONNRESET) #2482
Fascinating, do you mind sharing a reproducible snippet? This should be captured via

```js
const crypto = require("node:crypto");
const fs = require("node:fs/promises");
const { Storage } = require("@google-cloud/storage");

// Replace with organization-specific values
const PROJECT_ID = "";
const BUCKET_NAME = "";

const NUM_TEST_FILES = 10_000;
const SOURCE_DIRECTORY = "/tmp/gcs-example";

async function createTestFiles() {
  const files = [];
  await fs.mkdir(SOURCE_DIRECTORY, { recursive: true });
  for (let i = 0; i < NUM_TEST_FILES; i++) {
    const uuid = crypto.randomUUID();
    const file = `${SOURCE_DIRECTORY}/${uuid}`;
    await fs.writeFile(file, uuid);
    files.push(file);
  }
  return files;
}

(async function main() {
  const files = await createTestFiles();
  const gcs = new Storage({ projectId: PROJECT_ID });
  const bucket = gcs.bucket(BUCKET_NAME);
  try {
    await Promise.all(files.map((file) => bucket.upload(file)));
  } catch (err) {
    // This should be able to catch the socket error, but it does not
  }
})();
```

Here's a snippet that should be able to reproduce it. It will attempt to perform 10k concurrent uploads, which is likely sufficient to encounter the socket error, but you may find that the size of the generated files needs to be increased before the issue triggers.
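As an aside on reproducing: the snippet above fires all 10k uploads at once. If you want to control how much socket pressure is applied, a small batching helper (hypothetical, not part of this thread or of `@google-cloud/storage`; `inBatches` and `worker` are illustrative names) limits how many uploads are in flight at a time:

```js
// Hypothetical helper: runs the worker over the items in fixed-size
// batches, so at most `batchSize` requests are in flight at once.
async function inBatches(items, batchSize, worker) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Each batch still fails fast if any single worker call rejects.
    results.push(...(await Promise.all(batch.map(worker))));
  }
  return results;
}

// Example usage with a stand-in worker instead of bucket.upload:
inBatches([1, 2, 3, 4, 5], 2, async (n) => n * 2).then((out) =>
  console.log(out)
);
```

Swapping `files.map(...)` with `inBatches(files, 100, (file) => bucket.upload(file))` would keep the reproduction script otherwise unchanged.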
That's strange - I'm able to capture the error just fine locally with the same code provided, with a few minor mods:

```js
import crypto from "node:crypto";
import fs from "node:fs/promises";
import { Storage } from "@google-cloud/storage";

// Replace with organization-specific values
const NUM_TEST_FILES = 10_000;
const SOURCE_DIRECTORY = "/tmp/gcs-example";

async function createTestFiles() {
  const files = [];
  await fs.mkdir(SOURCE_DIRECTORY, { recursive: true });
  for (let i = 0; i < NUM_TEST_FILES; i++) {
    const uuid = crypto.randomUUID();
    const file = `${SOURCE_DIRECTORY}/${uuid}`;
    await fs.writeFile(file, uuid);
    files.push(file);
  }
  return files;
}

async function main() {
  console.log(`Creating ${NUM_TEST_FILES} test file(s)...`);
  const files = await createTestFiles();
  console.log("...created.");
  const gcs = new Storage();
  const bucket = gcs.bucket(process.env.BUCKET_NAME);
  try {
    await Promise.all(files.map((file) => bucket.upload(file)));
  } catch (err) {
    // This should be able to catch the socket error but it does not
    console.dir({ err });
  }
  console.log("done.");
}

await main();
```

Which version of storage are you using?
*ah, I'm assuming
We're currently on . Is it the ?
There were a number of changes upstream that may have resolved this, though after digging I wasn't able to point to a particular change that would have fixed it.
Yep, I see that error being caught.
@ryanwilsonperkin @danielbankhead it sounds like this has been resolved?
I intend to come back to this when work slows down a bit, but the issue still appears to be present in our environment. Going to see if I can get a more representative reproduction case.
I can also reproduce this (inconsistently), but only in our application. Using the scripts above, I never get ECONNRESET - but I do get other errors, which are handled correctly. Here's an example of the error we see:
@smcgivern I'm not too familiar with `pnpm`
pnpm's just an npm replacement, but the library version point is a fair one - having upgraded, I haven't been able to reproduce yet (although I wasn't able to reproduce consistently in the first place). I will report back if I see the same problem on the latest versions of this library + gaxios.
Thanks, closing for now.
Related: googleapis/google-cloud-node#2254
Environment details
- `@google-cloud/storage`
Steps to reproduce
When running multiple uploads in parallel, we frequently encounter `ECONNRESET` socket errors. These happen for us using the `Bucket#upload` method, and we would like to be able to catch the error and handle it appropriately, but the error is thrown in such a way that it can't be caught by a try/catch around `Bucket#upload` and instead needs to be captured at the process level by an event handler.

When GCS is instantiating the node-fetch client, the client should capture any `FetchError` and reject the promise with it.