Replies: 1 comment 1 reply
-
Hey @nixent, this will be possible soon, as sling will be using duckdb under the hood to read/write parquet files (allowing partitioning). Stay tuned.
-
I'd like to run a backfill from SQL Server to parquet files stored in ADLS and partition the target data by date parts of the `update_key` column, like `raw/my_system/{stream_table}/YYYY({update_key})/MM({update_key})/DD({update_key})`. Currently the timestamp variables YYYY, MM, DD are calculated from the run timestamp. Is there any way to make them take `update_key` instead? As a workaround I'm running multiple backfills and controlling partitions like this:
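The idea behind that kind of workaround can be sketched in Python: derive the YYYY/MM/DD path parts from each row's `update_key` value rather than the run timestamp, and plan one backfill per date window. The base path and the `orders` stream name below are hypothetical placeholders, and daily granularity is an assumption:

```python
from datetime import date, timedelta

def partition_path(base: str, table: str, update_key: date) -> str:
    # Build the date-part path from the row's update_key value,
    # not from the run timestamp.
    return f"{base}/{table}/{update_key:%Y}/{update_key:%m}/{update_key:%d}"

def daily_windows(start: date, end: date):
    # Yield (lo, hi) date windows, one backfill run per partition.
    d = start
    while d < end:
        yield d, d + timedelta(days=1)
        d += timedelta(days=1)

# Example: plan three daily backfills for a hypothetical "orders" stream.
for lo, hi in daily_windows(date(2023, 5, 1), date(2023, 5, 4)):
    print(lo, hi, partition_path("raw/my_system", "orders", lo))
```

Each `(lo, hi)` window would then be passed as the range of one backfill run, so every run lands in exactly one date partition.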