Replies: 2 comments 1 reply
-
The logic is correct, but we have considered changing it based on some other discussions (see #382). We are currently looking at ways to allow multiple source tables to be represented in records and then routed by the Postgres destination connector to their appropriate tables, thus handling a many-to-many table relationship for a given pipeline, but no final design has been selected yet. The problem lies in deciding where to put that routing data, as #382 discusses.
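For illustration only, here is a minimal Go sketch of what per-record routing could look like, assuming each record carries its source table name in a metadata field. The `Record` type, the `"table"` metadata key, and the grouping helper are hypothetical placeholders, not the final design:

```go
package main

import "fmt"

// Record is a simplified stand-in for a pipeline record; the real record
// type and metadata key are assumptions for this sketch, not a final design.
type Record struct {
	Metadata map[string]string
	Payload  []byte
}

// groupByTable buckets records by a hypothetical "table" metadata field,
// falling back to a default table when the field is absent, so the
// destination could issue one batched write per target table.
func groupByTable(records []Record, defaultTable string) map[string][]Record {
	groups := make(map[string][]Record)
	for _, r := range records {
		table := defaultTable
		if t, ok := r.Metadata["table"]; ok && t != "" {
			table = t
		}
		groups[table] = append(groups[table], r)
	}
	return groups
}

func main() {
	records := []Record{
		{Metadata: map[string]string{"table": "users"}, Payload: []byte(`{"id":1}`)},
		{Metadata: map[string]string{"table": "orders"}, Payload: []byte(`{"id":2}`)},
		{Metadata: map[string]string{}, Payload: []byte(`{"id":3}`)},
	}
	for table, batch := range groupByTable(records, "fallback_table") {
		fmt.Printf("would write %d record(s) to %q\n", len(batch), table)
	}
}
```

Whether the routing value lives in record metadata (as assumed here) or somewhere else is exactly the open question in #382.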
-
@maksenius 👋 I just ran into this discussion item and would like to check in. We recently announced support for multiple collections: https://meroxa.com/blog/conduit-0.10-comes-with-multiple-collections-support/, and our built-in PostgreSQL connector has supported multiple tables since version 0.7.0: https://github.com/ConduitIO/conduit-connector-postgres/releases/tag/v0.7.0. Does this solution meet your needs? Please let me know if anything is missing. Thank you!
-
I see logic in the Postgres connector for inserting data:
if the record metadata has a `table` key, it takes priority over the `table` variable from the config.
https://github.com/ConduitIO/conduit-connector-postgres/blob/ed97ce7be342c3d2ab2a78346d1325a141870533/destination/destination.go#L322
Is this logic correct?
Because if we want to create 2 destination connectors with different table names from 1 source, it will be a problem.
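For clarity, a rough sketch in Go of the precedence being described (hypothetical names, not the connector's actual code): the per-record metadata value wins over the configured table.

```go
// resolveTable mirrors the described behavior: a per-record "table" metadata
// value, if present, overrides the table name configured on the destination.
func resolveTable(metadata map[string]string, configuredTable string) string {
	if t, ok := metadata["table"]; ok && t != "" {
		return t // metadata wins
	}
	return configuredTable // fall back to the config value
}
```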