A note for the community
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Problem
I'm using an http_endpoint source to receive data from a remote Vector instance (whose own source is journald).
From this source I'm sending data to several sinks.
But I found that when one of the sinks dies, Vector stops sending to the others. It doesn't stop immediately; it delivers a few more lines to the remaining sinks (a few hundred lines of logs), but then it stops sending altogether.
I wanted to have a file sink as a backup in case the database goes down, but this behavior makes that impossible.
My configuration looks roughly like the sketch below, and all sinks stop "sinking" when ClickHouse goes down, even though the file sink is defined above ClickHouse in the pipeline (if that matters):
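(The exact configuration was not included in the report, so this is a hypothetical minimal reconstruction of the topology described above; the ch-local sink id comes from the logs below, while the source id, address, file path, and table name are illustrative assumptions.)

```toml
# Hypothetical sketch: one HTTP source fanned out to a file sink and a
# ClickHouse sink. Only the "ch-local" id is taken from the actual logs.
[sources.remote_in]
type = "http_server"              # receives events pushed from the remote Vector
address = "0.0.0.0:8080"

[sinks.file-backup]               # backup sink, defined above the database sink
type = "file"
inputs = ["remote_in"]
path = "/var/log/vector/backup-%Y-%m-%d.log"
encoding.codec = "json"

[sinks.ch-local]                  # sink id matches the warning logs below
type = "clickhouse"
inputs = ["remote_in"]
endpoint = "http://localhost:8123"
table = "logs"
```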
Situation when ClickHouse dies:
After the faulty sink is started again, the data is written to all sinks at the same time.
Nothing is missing; the events were apparently buffered somewhere in the meantime.
Vector's logs only report that the sink is down:
WARN sink{component_kind="sink" component_id=ch-local component_type=clickhouse}:request{request_id=2}:http: vector::internal_events::http_client: HTTP error. error=error trying to connect: dns error: failed to lookup address information: Try again error_type="request_failed" stage="processing" internal_log_rate_limit=true
WARN sink{component_kind="sink" component_id=ch-local component_type=clickhouse}:request{request_id=2}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: dns error: failed to lookup address information: Try again internal_log_rate_limit=true
Configuration
Debian 12
Version
vector:latest 0.43.0
Debug Output
No response
Example Data
No response
Additional Context
No response
References
No response
Hi @robinpecha! This is expected behavior; https://vector.dev/docs/about/concepts/#backpressure has more details. You can avoid it by configuring the buffer of the sink you expect downtime for (in this case ClickHouse) to drop events rather than apply backpressure.
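Concretely, that means setting when_full = "drop_newest" on the ClickHouse sink's buffer. A minimal sketch (the max_events value is an arbitrary assumption; tune it to your throughput):

```toml
[sinks.ch-local.buffer]
type = "memory"            # the default buffer type
max_events = 10000         # buffer capacity in events (assumed value)
when_full = "drop_newest"  # drop new events when full instead of applying
                           # backpressure; the default, "block", stalls the
                           # shared source and with it every other sink
```

If you would rather not lose events during the outage, a disk buffer (type = "disk" with a max_size) gives you far more headroom before the buffer fills and the when_full policy kicks in.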