
Adding replace_where incremental strategy (#293) #310

Merged: 1 commit merged into main from staging-293 on Apr 13, 2023
Conversation

@andrefurlan-db (Collaborator) commented on Apr 13, 2023

Description

This PR adds a new incremental strategy, `replace_where`. The strategy resolves to an `INSERT INTO ... REPLACE WHERE` statement. This completes the feature set explained here: https://docs.databricks.com/delta/selective-overwrite.html#replace-where&language-python
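For reference, the generated statement has roughly the following shape (an illustrative sketch only: the table and predicate are made up here, and the real SQL is assembled by the macros in this PR):

```sql
-- Illustrative sketch, not the literal output of the macros.
-- All rows in `events` that match the REPLACE WHERE predicate are atomically
-- replaced by the rows selected from the model's staging relation.
INSERT INTO events
REPLACE WHERE event_date >= current_date() - INTERVAL 3 DAYS
SELECT * FROM events__dbt_tmp;
```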

Most of the change brings macros over from dbt-spark into dbt-databricks. The only substantive code change is in validating incremental strategies and adding the `replace_where` strategy.

Why do we need it?

It enables use cases where part of the data is always replaced and a MERGE is not possible, such as when there is no primary key, e.g. an events table where we always want to replace the last 3 days.
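As a sketch, the three-day example above might be configured like this (assuming the strategy is selected with `incremental_strategy='replace_where'` and the predicate is supplied through the `incremental_predicates` config; treat these option names and the model/source names as assumptions to be checked against the macros in this PR):

```sql
-- Hypothetical model, e.g. models/events.sql; table and source names are illustrative.
{{
  config(
    materialized='incremental',
    file_format='delta',
    incremental_strategy='replace_where',
    incremental_predicates=["event_date >= current_date() - INTERVAL 3 DAYS"]
  )
}}

select *
from {{ source('raw', 'events') }}
where event_date >= current_date() - INTERVAL 3 DAYS
```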

Difference from insert_overwrite

insert_overwrite only works with the dynamic partition overwrite Spark setting, which is not available in SQL warehouses or on any Unity Catalog-enabled cluster. It also operates only on whole partitions, making it difficult to set up and to ensure that the correct data is dropped.

Checklist

  • I have run this code in development and it appears to resolve the stated issue
  • This PR includes tests, or tests are not required/relevant for this PR
  • I have updated the CHANGELOG.md and added information about my change to the "dbt-databricks next" section.

@andrefurlan-db temporarily deployed to azure-prod-pecou on April 13, 2023 17:31 with GitHub Actions (inactive)
@andrefurlan-db temporarily deployed to azure-prod-peco on April 13, 2023 17:31 with GitHub Actions (inactive)
@andrefurlan-db merged commit ed59030 into main on Apr 13, 2023
@andrefurlan-db deleted the staging-293 branch on April 13, 2023 18:06
andrefurlan-db added a commit that referenced this pull request Apr 19, 2023