Command which 'run's, then 'test's (in the same command). #2234
Comments
Thanks for opening @doctoryes! We're still thinking about the exact mechanism we should employ here, but I very much buy this use-case and it's one that I'm keen to support!
To implement this feature, let's add a flag.
Would it make sense to also specify …?
That's a great question, which I would take a step further: how should we be passing task-specific flags to a generalized command? As I noted in #2743, this is even trickier for same-named flags that do subtly different things depending on the task type.
Hello! Having a little time to get back to dbt, I only read this thread today, and would like to suggest this link to our Discourse post on design by contract. This could help clarify (IMHO) the "build" idea. I tried to figure out how to add to dbt not only code testing, but mainly correctness validation.
Describe the feature
On a large DBT project with many sources/models, DBT startup time affects development velocity when iterating on models. A common development pattern is to `dbt run -m <mymodel>`, immediately followed by (if the run succeeds) `dbt test -m <mymodel>`. A `dbt run_then_test` command (with a better name/design?) would be nice, which - in the context of a single DBT command - both runs and tests the model, saving the startup time of the `dbt test`.
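For concreteness, the two-step pattern described above looks roughly like this in a shell today (a sketch; `my_model` stands in for any model name), with each invocation paying the full project startup cost:

```sh
dbt run -m my_model    # first invocation: parse/compile the project, run the model
dbt test -m my_model   # second invocation: parse the project again, then run its tests
```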
Describe alternatives you've considered
I'm unaware of any alternatives here. This sort of functionality would need to be within DBT itself.
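For completeness, one shell-level workaround (not mentioned in the issue) is to chain the two commands so tests only run when the run succeeds. This is a sketch: the `dbt_run_test` function name is hypothetical, and the approach still invokes dbt twice, so the startup overhead this request targets remains:

```sh
# Hypothetical wrapper, not a real dbt command: run a model, then test it
# only if the run succeeded. dbt still starts up twice.
dbt_run_test() {
  local model="$1"
  dbt run -m "$model" && dbt test -m "$model"
}

# Usage:
#   dbt_run_test my_model
```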
Additional context
Although a similar issue exists here: #1054, this issue is different. I'm not asking for the results of the `dbt run` to be rolled back if `dbt test` fails, which is typically unnecessary while iterating in development.
Who will this benefit?
This feature would aid any developers who are iterating on models for a large-scale DBT project that has significant DBT startup time overhead (~30-60 seconds).