Merge branch 'current' into BrJan-patch-2
mirnawong1 authored Aug 22, 2023
2 parents 66a8389 + b680288 commit 1b38097
Showing 3 changed files with 111 additions and 4 deletions.
85 changes: 85 additions & 0 deletions website/docs/docs/collaborate/govern/model-contracts.md
@@ -86,6 +86,91 @@ When building a model with a defined contract, dbt will do two things differently:
1. dbt will run a "preflight" check to ensure that the model's query will return a set of columns with names and data types matching the ones you have defined. This check is agnostic to the order of columns specified in your model (SQL) or YAML spec.
2. dbt will include the column names, data types, and constraints in the DDL statements it submits to the data platform, which will be enforced while building or updating the model's table.
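
For example, a contract with column names, data types, and constraints can be defined in the model's YAML spec. This is a minimal sketch; the model and column names are illustrative, not taken from a real project:

```yaml
models:
  - name: dim_customers
    config:
      contract:
        enforced: true   # turn on the preflight check and DDL-level enforcement
    columns:
      - name: customer_id
        data_type: int
        constraints:
          - type: not_null
      - name: customer_name
        data_type: string
```

With `contract.enforced: true`, the model's query must return exactly these columns with these data types, and the `not_null` constraint is included in the DDL dbt submits to the platform.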

## Platform constraint support

Select the adapter-specific tab for more information on [constraint](/reference/resource-properties/constraints) support across platforms. Constraints fall into three categories based on support and platform enforcement:

- **Supported and enforced** — The model won't build if it violates the constraint.
- **Supported and not enforced** — The platform accepts the constraint definition, but a model can still build even if it violates the constraint. The constraint exists for metadata purposes only. This is common for modern cloud data warehouses and less common for legacy databases.
- **Not supported and not enforced** — You can't specify the type of constraint for the platform.



<Tabs>

<TabItem value="Redshift" label="Redshift">

| Constraint type | Support | Platform enforcement |
|:----------------|:-------------|:------------------|
| not_null | ✅ Supported | ✅ Enforced |
| primary_key | ✅ Supported | ❌ Not enforced |
| foreign_key | ✅ Supported | ❌ Not enforced |
| unique | ✅ Supported | ❌ Not enforced |
| check | ❌ Not supported | ❌ Not enforced |

</TabItem>
<TabItem value="Snowflake" label="Snowflake">

| Constraint type | Support | Platform enforcement |
|:----------------|:-------------|:---------------------|
| not_null | ✅ Supported | ✅ Enforced |
| primary_key | ✅ Supported | ❌ Not enforced |
| foreign_key | ✅ Supported | ❌ Not enforced |
| unique | ✅ Supported | ❌ Not enforced |
| check | ❌ Not supported | ❌ Not enforced |

</TabItem>
<TabItem value="BigQuery" label="BigQuery">

| Constraint type | Support | Platform enforcement |
|:-----------------|:-------------|:---------------------|
| not_null | ✅ Supported | ✅ Enforced |
| primary_key | ✅ Supported | ✅ Enforced |
| foreign_key | ✅ Supported | ✅ Enforced |
| unique | ❌ Not supported | ❌ Not enforced |
| check | ❌ Not supported | ❌ Not enforced |

</TabItem>
<TabItem value="Postgres" label="Postgres">

| Constraint type | Support | Platform enforcement |
|:----------------|:-------------|:--------------------|
| not_null | ✅ Supported | ✅ Enforced |
| primary_key | ✅ Supported | ✅ Enforced |
| foreign_key | ✅ Supported | ✅ Enforced |
| unique | ✅ Supported | ✅ Enforced |
| check | ✅ Supported | ✅ Enforced |

</TabItem>
<TabItem value="Spark" label="Spark">

Currently, `not_null` and `check` constraints are supported, but the platform enforces them only after a model builds. Because they can't be enforced at build time, dbt considers them `supported` but `not enforced`, which means they're not part of the model contract. This table will change as these features evolve.

| Constraint type | Support | Platform enforcement |
|:----------------|:------------|:---------------------|
| not_null | ✅ Supported | ❌ Not enforced |
| primary_key | ✅ Supported | ❌ Not enforced |
| foreign_key | ✅ Supported | ❌ Not enforced |
| unique | ✅ Supported | ❌ Not enforced |
| check | ✅ Supported | ❌ Not enforced |

</TabItem>
<TabItem value="Databricks" label="Databricks">

Currently, `not_null` and `check` constraints are supported, but the platform enforces them only after a model builds. Because they can't be enforced at build time, dbt considers them `supported` but `not enforced`, which means they're not part of the model contract. This table will change as these features evolve.

| Constraint type | Support | Platform enforcement |
|:----------------|:-------------|:---------------------|
| not_null | ✅ Supported | ❌ Not enforced |
| primary_key | ✅ Supported | ❌ Not enforced |
| foreign_key | ✅ Supported | ❌ Not enforced |
| unique | ✅ Supported | ❌ Not enforced |
| check | ✅ Supported | ❌ Not enforced |

</TabItem>
</Tabs>


## FAQs

### Which models should have contracts?
29 changes: 26 additions & 3 deletions website/docs/docs/core/pip-install.md
@@ -5,14 +5,37 @@ description: "You can use pip to install dbt Core and adapter plugins from the c

On Windows and Linux, use `pip` to install dbt Core. On macOS, you can use either `pip` or [Homebrew](/docs/core/homebrew-install).

You can install dbt Core and plugins using `pip` because they are Python modules distributed on [PyPI](https://pypi.org/project/dbt/). We recommend using virtual environments when installing with `pip`.

<FAQ path="Core/install-pip-os-prereqs" />
<FAQ path="Core/install-python-compatibility" />
<FAQ path="Core/install-pip-best-practices" />

### Using virtual environments
We recommend using virtual environments (venv) to isolate your dbt installation and its `pip` modules from other Python projects.

1. Create a new venv:

```shell
python3 -m venv dbt-env # create the environment
```

2. Activate that same virtual environment each time you create a shell window or session:

```shell
source dbt-env/bin/activate # activate the environment for Mac and Linux OR
dbt-env\Scripts\activate # activate the environment for Windows
```
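
As a quick, optional sanity check (plain shell commands; no dbt-specific tooling assumed), confirm that the shell now resolves `python` from inside the environment:

```shell
which python       # inside an active venv, this path ends in dbt-env/bin/python
python -m pip list # lists only the packages installed in this environment
```

If `which python` still points at a system interpreter, the environment isn't active in your current shell session.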

#### Create an alias
To activate your dbt environment with every new shell window or session, you can create an alias for the `source` command in your `$HOME/.bashrc`, `$HOME/.zshrc`, or whichever config file your shell draws from.

For example, add the following to your rc file, replacing `<PATH_TO_VIRTUAL_ENV_CONFIG>` with the path to your virtual environment configuration.

```shell
alias env_dbt='source <PATH_TO_VIRTUAL_ENV_CONFIG>/bin/activate'
```

### Installing the adapter
Once you know [which adapter](/docs/supported-data-platforms) you're using, you can install it as `dbt-<adapter>`. For example, if using Postgres:

```shell
pip install dbt-postgres
```
1 change: 0 additions & 1 deletion website/docs/reference/dbt-classes.md
@@ -197,7 +197,6 @@ The execution of a resource in dbt generates a `Result` object. This object cont
- `thread_id`: Which thread executed this node? E.g. `Thread-1`
- `execution_time`: Total time spent executing this node, measured in seconds.
- `timing`: Array that breaks down execution time into steps (often `compile` + `execute`)
- `adapter_response`: Dictionary of metadata returned from the database, which varies by adapter. E.g. success `code`, number of `rows_affected`, total `bytes_processed`, etc.
- `message`: How dbt will report this result on the CLI, based on information returned from the database
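
These fields are serialized into the `run_results.json` artifact that dbt writes after an invocation. A minimal sketch of reading them (the sample payload below is illustrative, not real dbt output, and only includes the fields described above):

```python
import json

# Illustrative sample mirroring the shape of the "results" array
# in dbt's run_results.json artifact.
sample = json.loads("""
{
  "results": [
    {
      "unique_id": "model.my_project.my_model",
      "status": "success",
      "thread_id": "Thread-1",
      "execution_time": 1.23,
      "timing": [{"name": "compile"}, {"name": "execute"}],
      "message": "SUCCESS 1"
    }
  ]
}
""")

def summarize(run_results):
    # Pull a few of the Result fields out of each entry.
    return [
        (r["unique_id"], r["status"], r["execution_time"])
        for r in run_results["results"]
    ]

print(summarize(sample))
# → [('model.my_project.my_model', 'success', 1.23)]
```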

import RowsAffected from '/snippets/_run-result.md';
