Invalid setting when creating temporary tables with ON CLUSTER #366
Comments
Is there any reason you can't just use a MergeTree table? Temporary tables have all kinds of gotchas, especially in a clustered environment, in ClickHouse Cloud, or if your server is behind a load balancer.
We have a multi-node cluster, so I would like the uploaded results to be accessible on all nodes; otherwise a specific node has to be configured for all clients. These tables are created as intermediates by elementary. Besides, even using MergeTree does not solve the actual issue, which is a bug. The temporary tables are being created by the elementary package and not explicitly by my dbt project. Currently it is the most compatible observability toolkit for dbt.
That sounds like an incompatibility in the elementary package. The data in a temporary (in memory) table on one node will not be available on other nodes in the cluster. So anything you do in a temporary table will be by definition "not accessible on all nodes", and if that package requires temporary tables it's extremely unlikely it will work on a clustered ClickHouse installation.
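To make the scoping concrete, here is a minimal sketch (assuming a local server and the clickhouse-connect driver; the host and table names are hypothetical, not taken from this issue) showing that a temporary table created in one session is invisible even to a second connection on the same node, let alone to another node in the cluster:

```python
# Minimal sketch: temporary tables are scoped to the session that created them.
# Assumes a local ClickHouse server reachable by the default user.
import clickhouse_connect

session_a = clickhouse_connect.get_client(host="localhost")
session_b = clickhouse_connect.get_client(host="localhost")

# Create and populate a temporary table in the first session.
session_a.command("CREATE TEMPORARY TABLE scratch (id UInt32)")
session_a.command("INSERT INTO scratch VALUES (1), (2), (3)")
print(session_a.query("SELECT count() FROM scratch").result_rows)  # [(3,)]

# The second session has its own scope, so the table does not exist there.
try:
    session_b.query("SELECT count() FROM scratch")
except Exception as exc:
    print(f"Second session cannot see the temporary table: {exc}")
```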
I am not contesting this, but it seems like a red herring in this context. The final tables are created as clustered tables; however, the intermediate results are stored in a supposedly temporary table. It would be really great if this could be resolved, since it is seemingly the only issue preventing some (limited) observability on our dbt stack using elementary and ClickHouse.
Fair point. You are correct, it's technically a bug. It might get fixed faster if you submit a PR. :)
Hi!

When trying to create a temporary table with the `cluster` property set, the adapter creates the temporary table with `Engine=Memory` but still passes the `replicated_deduplication_window` setting, which is invalid for that engine. It seems the check that adds `replicated_deduplication_window` only looks at the materialization type and not at the `Engine`, which it should.

I think that the `update_model_settings` method should accept the engine type and only allow settings that are available for each engine.

I am seeing this issue when trying to run https://github.com/elementary-data/dbt-data-reliability because it tries to create a temporary table when artifacts change.
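For illustration, here is a minimal sketch of the kind of engine-aware filtering being proposed. This is not the actual dbt-clickhouse code; the helper name and the constant below are hypothetical, and the real fix would live wherever `update_model_settings` resolves the model settings:

```python
# Sketch only: hypothetical helper, not the adapter's real API. The idea is that
# settings from the replicated-deduplication family are only valid for
# Replicated*MergeTree engines and must be dropped for anything else, such as
# the Engine=Memory temporary tables described above.
REPLICATED_MERGE_TREE_ONLY = {
    "replicated_deduplication_window",
    "replicated_deduplication_window_seconds",
}


def filter_settings_for_engine(settings: dict, engine: str) -> dict:
    """Return only the settings that the given table engine understands."""
    is_replicated_merge_tree = "Replicated" in engine and "MergeTree" in engine
    return {
        name: value
        for name, value in settings.items()
        if name not in REPLICATED_MERGE_TREE_ONLY or is_replicated_merge_tree
    }


settings = {"replicated_deduplication_window": 0}
print(filter_settings_for_engine(settings, "Memory"))               # {}
print(filter_settings_for_engine(settings, "ReplicatedMergeTree"))  # setting kept
```

Passing the resolved engine into `update_model_settings` (or an equivalent hook) and running the settings through a filter like this would keep the invalid `SETTINGS` clause from ever reaching ClickHouse.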
Steps to reproduce
Run `dbt run -m elementary` against a profile with the `cluster` setting configured.
Expected behaviour
No errors are expected. The temporary table should be created in memory on the connected node.
Code examples, such as models or profile settings
dbt and/or ClickHouse server logs
Configuration
Environment
- ClickHouse server: `clickhouse-server` Docker image

ClickHouse server

- `CREATE TABLE` statements for tables involved: see above