Integration between Spark and dbt has to be configured through a Thrift server or a Spark session when running locally, and I think this allocates more resources than necessary unless the workload actually requires multi-node processing.

I think that with dbt-duckdb and its built-in delta-rs or pyiceberg plug-ins, we could dramatically reduce this weight in the composition of the aio directory Docker files that you've constructed in ngods stocks.

In addition, dbt-duckdb currently has experimental support for the iceberg external table option, and a delta external table is also being developed in a recent PR.

I'd like to ask for your opinion.
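For illustration, here is a rough sketch of what such a Spark-free setup might look like in a dbt-duckdb `profiles.yml`, based on the plugin mechanism described in the dbt-duckdb README; the profile name, file paths, and table identifier below are hypothetical, and the exact plugin/source options should be checked against the current dbt-duckdb documentation:

```yaml
# profiles.yml -- hypothetical dbt-duckdb profile; no Spark Thrift server needed
ngods:                           # hypothetical profile name
  target: dev
  outputs:
    dev:
      type: duckdb
      path: /data/ngods.duckdb   # example path to a local DuckDB database file
      plugins:
        # load the pyiceberg-backed plugin so sources can read Iceberg tables
        - module: iceberg

# models/sources.yml -- example source read through the iceberg plugin
# sources:
#   - name: warehouse
#     meta:
#       plugin: iceberg
#     tables:
#       - name: trades            # hypothetical table
#         meta:
#           iceberg_table: "warehouse.trades"
```

The idea is that DuckDB runs in-process inside the dbt container, so the Thrift server, Spark master/worker services, and their memory reservations could drop out of the compose file entirely for single-node workloads.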