awskinesisexporter: Add support for partitioning records by traceId #32027
@preetsarb have you considered using layered collectors connected with the load-balancing exporter to shard by trace ID? That could be an option in the interim. Another problem I foresee is the Kinesis limit of 1 MB per second per shard. Traces can easily exceed 1 MB, which raises the question of how that limit can be avoided at scale.
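For reference, a minimal sketch of the layered-collector approach mentioned above, using the loadbalancing exporter's `routing_key` option with a static resolver. The hostnames and the rest of the pipeline are placeholders, not a recommended production config:

```yaml
exporters:
  loadbalancing:
    routing_key: "traceID"   # route all spans of a trace to the same backend
    protocol:
      otlp:
        tls:
          insecure: true
    resolver:
      static:
        hostnames:           # placeholder second-tier collectors
          - collector-a:4317
          - collector-b:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [loadbalancing]
```

As the follow-up comment notes, this routes traffic between collector instances, which can cross availability zones and incur data-transfer cost.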
@jamesmoessis we have been using something similar to the load-balancing exporter in our trace-processing stack, which is hosted in AWS. The problem we have been facing is that the current solution results in a lot of data transfer between instances across different AWS availability zones, which contributes to a big chunk of cost. As for the shard data-rate limit, I don't think this change would have any impact: the mapping of trace IDs to specific shards should be random enough to still give an even distribution of records across shards. But we will keep an eye out for any issues with hot partitioning. Created a pull request to add this feature: #33563
Component(s)
exporter/awskinesis
Is your feature request related to a problem? Please describe.
Currently, spans of a single trace can be spread over multiple shards, so it is not possible (without peer forwarding of spans) to perform tail sampling when consuming data from Kinesis.
Describe the solution you'd like
Kinesis supports partitioning of records. The traceId can be used as the partitionKey so that all spans of a trace are routed to the same shard.
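The key property this relies on is that Kinesis's key-to-shard mapping is deterministic: the same partition key always lands on the same shard. A minimal Go sketch, where `shardFor` is a hypothetical helper mimicking the documented MD5 hash-key mapping (assuming equal-sized shard ranges):

```go
package main

import (
	"crypto/md5"
	"fmt"
	"math/big"
)

// shardFor mimics how Kinesis maps a partition key to a shard:
// the key is MD5-hashed into a 128-bit integer, and each shard
// owns a contiguous range of that hash space (equal ranges assumed
// here for simplicity).
func shardFor(partitionKey string, numShards int) int {
	sum := md5.Sum([]byte(partitionKey))
	h := new(big.Int).SetBytes(sum[:])
	space := new(big.Int).Lsh(big.NewInt(1), 128) // 2^128
	size := new(big.Int).Div(space, big.NewInt(int64(numShards)))
	s := new(big.Int).Div(h, size).Int64()
	if s >= int64(numShards) { // guard the top edge of the range
		s = int64(numShards) - 1
	}
	return int(s)
}

func main() {
	// example W3C-style trace ID; every span of this trace would use
	// it as the partition key and so land on the same shard
	traceID := "4bf92f3577b34da6a3ce929d0e0e4736"
	fmt.Println(shardFor(traceID, 4) == shardFor(traceID, 4)) // prints true
}
```

In the exporter itself this would mean setting the record's `PartitionKey` to the trace ID instead of a random value when building the PutRecords request.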
Describe alternatives you've considered
No response
Additional context
A similar feature was added for the Kafka exporter: #12318