I'm running into an issue where the connector's JVM exits with an OOM error when sinking a topic freshly snapshotted from a PG database through Debezium. The setup:
2 topics, topic A with 500k messages, topic B with 200
Avro format for key and value
Source connector produces messages with no issues; messages make it into Kafka
Topic A's message size is similar to Topic B's; total size of topic A is 214kb, total size of topic B is 180Mb
The file name template config is "file.name.template": "{{topic}}/{{partition}}-{{start_offset}}-{{timestamp:unit=yyyy}}{{timestamp:unit=MM}}{{timestamp:unit=dd}}-{{timestamp:unit=HH}}.parquet.gz" (a fuller config sketch is below)
output format is parquet
This is all running on Aiven.
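For reference, the relevant part of the sink config looks roughly like this. This is only a sketch reconstructed from the properties mentioned above, not the attached config verbatim: the topic name, bucket, and credentials are placeholders, file.max.records shows one of the values we tried, the Avro converters are shown without their schema registry settings, and offset.flush.interval.ms is a worker-level setting so it isn't listed here:

{
  "connector.class": "io.aiven.kafka.connect.gcs.GcsSinkConnector",
  "topics": "topic_b",
  "gcs.bucket.name": "<bucket>",
  "gcs.credentials.json": "<service-account-json>",
  "format.output.type": "parquet",
  "file.compression.type": "gzip",
  "file.name.template": "{{topic}}/{{partition}}-{{start_offset}}-{{timestamp:unit=yyyy}}{{timestamp:unit=MM}}{{timestamp:unit=dd}}-{{timestamp:unit=HH}}.parquet.gz",
  "file.max.records": "500",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter"
}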
Topic A successfully sinks into GCS. The parquet file gets uploaded and all the data that we expect is there. Topic B consistently runs OOM.
We've tried a variety of values for file.max.records, ranging from 50 to 1000, and for offset.flush.interval.ms, the lowest being 50ms, but we still experience the OOMs.
Part of the issue, we believe, comes from the fact that since this starts with a PG snapshot, the timestamps are all within an hour of each other for the 1M records already in the topic. The connector's grouping logic would therefore consider the entire topic's content to be part of one group, and if the GCS connector behaves the same as the S3 one, this could be an indication: https://help.aiven.io/en/articles/4775651-kafka-outofmemoryerror-exceptions. However, we would've expected file.max.records to compensate for this.
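To make the grouping concern concrete: because every snapshotted record's timestamp falls inside the same hour, the timestamp portion of the expanded template is identical for all of them. With a made-up topic name, date, and partition, and as we understand the grouping logic, the whole snapshot for a partition resolves to names like:

topic_b/0-0-20240115-14.parquet.gz

where only the start_offset part could ever differ, which is why we suspect the connector ends up trying to hold far too much of the topic in memory at once.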
Also, while upgrading plans is an option, we'd like to understand which knobs to turn to control memory utilization. Full cleaned-up config attached. Any insight into what might be happening?
@mkokho that might be a little trickier because it's fully managed by Aiven. Here are all the details I have access to:
3-node cluster, 1 CPU per cluster, 600GB storage. From the logs, it looks like the connectors start with a 768MB heap. I believe that other than us increasing the max message size (we do have some rows that contain blobs), everything else is the default config.
We also tested with a dataset of 1M rows with no blobs, where the size per message is predictable, and ended up hitting the same issue...