A note for the community
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
If you are interested in working on this issue or have submitted a pull request, please leave a comment
Problem
We see Vector (as an agent) uploading hundreds of tiny files to S3, but we expect the files to be much larger. We want fewer files so that we can make better use of SQS's 10-message receive limit.
Looking at one of the files, we see it contains 16 messages. The uncompressed file size is 29 KB, and the events in the file span 0.242 seconds.
The count, size, and time do not conform to the configured settings or to https://vector.dev/docs/reference/configuration/sinks/aws_s3/#buffers-and-batches
In a single hour an agent uploaded 2,319 files containing 8,863 messages, totaling 10,480,814 bytes.
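For context, the flush limits referenced above live under the aws_s3 sink's `batch` table. The following is a minimal sketch only; the sink name, inputs, bucket, and values are placeholders, not the configuration from this report:

```toml
# Hypothetical aws_s3 sink showing where the batch limits are set.
# Values below are illustrative, not the reporter's settings.
[sinks.s3_archive]
type = "aws_s3"
inputs = ["kubernetes_logs"]
bucket = "example-bucket"
region = "us-east-1"
compression = "gzip"

[sinks.s3_archive.encoding]
codec = "json"

[sinks.s3_archive.batch]
max_bytes = 10000000   # flush a batch once it reaches ~10 MB
timeout_secs = 300     # or once the batch is 5 minutes old
```

As the comment at the end of this thread notes, batches are also partitioned by the rendered `key_prefix`, so a prefix that changes frequently can produce many small objects regardless of these limits.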
Configuration
Version
0.32.1
Debug Output
No response
Example Data
No response
Additional Context
No response
References
No response
To confirm, are you seeing multiple files created for the same key prefix? I'm noticing your key prefix includes %s which will partition batches by second in addition to cluster id and node name.
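To illustrate that point, a key prefix along the lines described in the comment (a hypothetical reconstruction; the actual prefix is not shown in this thread, and the template field names are assumptions) forces a new object key, and therefore a separate small batch, for every second of data:

```toml
# Hypothetical prefix partitioning by cluster id, node name, and second.
# The %s strftime specifier changes every second, so each second of events
# becomes its own batch and its own small S3 object.
key_prefix = "cluster={{ cluster_id }}/node={{ node_name }}/%s/"

# Dropping the per-second component lets batches grow until
# batch.max_bytes or batch.timeout_secs is reached instead:
# key_prefix = "cluster={{ cluster_id }}/node={{ node_name }}/date=%F/"
```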