Batching #13
Comments
olizilla
added a commit
that referenced
this issue
Jul 22, 2022
adds a pickupBatch impl for handleMessageBatch so we can do more than 1 at a time. See: #13 License: (Apache-2.0 AND MIT) Signed-off-by: Oli Evans <[email protected]>
olizilla
added a commit
that referenced
this issue
Aug 3, 2022
adds a pickupBatch impl for handleMessageBatch so we can do more than 1 at a time. See: #13 License: (Apache-2.0 AND MIT) Signed-off-by: Oli Evans <[email protected]>
Closed with PR: #73
olizilla
added a commit
that referenced
this issue
Mar 14, 2023
- switch to an SQS lib that polls for new messages concurrently rather than in batches. **This is rad** as now we'll make better use of each container!
- treat timeouts as a regular failure. Let the message go back on the queue for another node to try. After 3 goes it'll go to the dead-letter queue and be marked as failed. This is fine, and it simplifies the pickup worker a lot, as it doesn't need to talk to dynamo or determine the cause of an error.
- rewrite the pickup worker so we can compose it out of single-responsibility pieces instead of having to pass through the giant config ball. _It's so much simpler now!_ You can figure out what it does from its parts: `sqsPoller` + `carFetcher` + `s3Uploader`

```js
const pickup = createPickup({
  sqsPoller: createSqsPoller({
    queueUrl: SQS_QUEUE_URL,
    maxInFlight: BATCH_SIZE
  }),
  carFetcher: new CarFetcher({
    ipfsApiUrl: IPFS_API_URL,
    fetchTimeoutMs: TIMEOUT_FETCH
  }),
  s3Uploader: new S3Uploader({
    bucket: VALIDATION_BUCKET
  })
})
```

see: https://github.com/PruvoNet/squiss-ts/

fixes #13 fixes #116 fixes #101

License: MIT

Signed-off-by: Oli Evans <[email protected]>
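A minimal sketch of how `createPickup` might wire those three pieces together. This is an assumption, not the repo's actual implementation: the `message` event, `msg.body`, `msg.del()`, and `msg.release()` follow squiss-ts conventions, and the `fetch`/`upload` method names on the fetcher and uploader are hypothetical.

```javascript
// Hypothetical composition sketch (not the real pickup internals):
// poll a message, fetch the CAR, upload it, then ack or release.
function createPickup ({ sqsPoller, carFetcher, s3Uploader }) {
  sqsPoller.on('message', async (msg) => {
    const { cid } = msg.body
    try {
      const car = await carFetcher.fetch(cid)
      await s3Uploader.upload(cid, car)
      msg.del() // success: delete the message from the queue
    } catch (err) {
      // failure (including timeouts): release the message so another
      // node can retry; after 3 tries SQS moves it to the DLQ
      msg.release()
    }
  })
  return {
    start: () => sqsPoller.start(),
    stop: () => sqsPoller.stop()
  }
}
```

The appeal of this shape is that each piece has one job, so the worker needs no shared config ball and no dynamo lookups to classify errors.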
We have the worst possible scaling setup right now: 1 request -> 1 worker node. 0 parallelisation or batching. It's dial-a-kubo. Each request gets its own kubo, and doesn't let go until the request is done or we time out.
We can process messages in batches of ~5, limited by the 200GB storage max in Fargate currently.
TODO
- handleMessageBatch
- https://github.com/bbc/sqs-consumer#options
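A sketch of what a batch handler for sqs-consumer's `handleMessageBatch` option could look like. The `processOne` helper is an assumption (standing in for "fetch the CAR via kubo, upload to S3"), and the return-the-successes convention is one possible way to let failed messages stay on the queue for retry; check the sqs-consumer options docs for the exact contract in the version you use.

```javascript
// Hypothetical batch handler sketch for bbc/sqs-consumer.
// `processOne` is an assumed helper; real pickup code may differ.
async function handleMessageBatch (messages, processOne) {
  // process the whole batch concurrently instead of one message at a time
  const results = await Promise.allSettled(
    messages.map((msg) => processOne(JSON.parse(msg.Body)))
  )
  // return only the messages that succeeded, so failures remain on the
  // queue to be retried (and eventually land in the dead-letter queue)
  return messages.filter((_, i) => results[i].status === 'fulfilled')
}
```

Wiring this up would look roughly like `Consumer.create({ queueUrl, batchSize: 5, handleMessageBatch: (msgs) => handleMessageBatch(msgs, processOne) })`, matching the ~5-message batch limit mentioned above.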