---
sidebar_position: 6
---

# Supabase

This connector uses change data capture (CDC) to continuously capture updates in a Supabase PostgreSQL database into one or more Flow collections.

It is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-postgres:dev`](https://github.com/estuary/connectors/pkgs/container/source-postgres) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions.

## Supported versions and platforms

This connector supports all Supabase PostgreSQL instances.

## Prerequisites

You'll need a Supabase PostgreSQL database set up with the following:

* [Logical replication enabled](https://www.postgresql.org/docs/current/runtime-config-wal.html) — `wal_level=logical`
* A [user role](https://www.postgresql.org/docs/current/sql-createrole.html) with the `REPLICATION` attribute
* A [replication slot](https://www.postgresql.org/docs/current/warm-standby.html#STREAMING-REPLICATION-SLOTS). This represents a “cursor” into the PostgreSQL write-ahead log from which change events can be read.
  * Optional; if none exist, one will be created by the connector.
  * If you wish to run multiple captures from the same database, each must have its own slot.
    You can create these slots yourself (see the example after this list), or specify a name other than the default in the advanced [configuration](#configuration) and the connector will create it for you.
* A [publication](https://www.postgresql.org/docs/current/sql-createpublication.html). This represents the set of tables for which change events will be reported.
  * In more restricted setups, this must be created manually, but it can be created automatically if the connector has suitable permissions.
* A watermarks table. The watermarks table is a small “scratch space” to which the connector occasionally writes a small amount of data to ensure accuracy when backfilling preexisting table contents.
  * In more restricted setups, this must be created manually, but it can be created automatically if the connector has suitable permissions.
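
If you choose to create a replication slot manually, a statement like the following sketch does it. Here `flow_slot` matches the connector's default slot name, and `pgoutput` is PostgreSQL's built-in logical decoding plugin used with publications; adjust the slot name if you run multiple captures.

```sql
-- Create a logical replication slot using the built-in pgoutput plugin.
SELECT pg_create_logical_replication_slot('flow_slot', 'pgoutput');
```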

:::tip Configuration Tip
To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](/guides/connect-network/).
:::
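
If you do need a tunnel, a rough sketch of the endpoint configuration looks like the following. The bastion host address and the key are placeholders; see the linked guide for the authoritative field names and setup steps.

```yaml
config:
  address: "localhost:5432"
  # ... other endpoint properties ...
  networkTunnel:
    sshForwarding:
      # Placeholder bastion endpoint; replace with your SSH server's address.
      sshEndpoint: ssh://user@bastion-host:22
      # Placeholder; supply the private key for key-pair authentication.
      privateKey: ${SSH_PRIVATE_KEY}
```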

Additional prerequisites:

* A Supabase IPv4 address. This can be configured under "Project Settings" -> "Add-ons" within Supabase's UI.

## Setup

The simplest way to meet the above prerequisites is to change the WAL level and have the connector use a database superuser role.

For a more restricted setup, create a new user with just the required permissions as detailed in the following steps:

1. Connect to your instance and create a new user and password:

   ```sql
   CREATE USER flow_capture WITH PASSWORD 'secret' REPLICATION;
   ```

2. Assign the appropriate role.

   1. If using PostgreSQL v14 or later:

      ```sql
      GRANT pg_read_all_data TO flow_capture;
      ```

   2. If using an earlier version:

      ```sql
      ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO flow_capture;
      GRANT SELECT ON ALL TABLES IN SCHEMA public, <other_schema> TO flow_capture;
      GRANT SELECT ON ALL TABLES IN SCHEMA information_schema, pg_catalog TO flow_capture;
      ```

      where `<other_schema>` lists all schemas that will be captured from.

      :::info
      If an even more restricted set of permissions is desired, you can also grant SELECT on
      just the specific table(s) which should be captured from. The `information_schema` and
      `pg_catalog` access is required for stream auto-discovery, but not for capturing already
      configured streams.
      :::

3. Create the watermarks table, grant privileges, and create the publication:

   ```sql
   CREATE TABLE IF NOT EXISTS public.flow_watermarks (slot TEXT PRIMARY KEY, watermark TEXT);
   GRANT ALL PRIVILEGES ON TABLE public.flow_watermarks TO flow_capture;
   CREATE PUBLICATION flow_publication;
   ALTER PUBLICATION flow_publication SET (publish_via_partition_root = true);
   ALTER PUBLICATION flow_publication ADD TABLE public.flow_watermarks, <other_tables>;
   ```

   where `<other_tables>` lists all tables that will be captured from. The `publish_via_partition_root`
   setting is recommended (because most users will want changes to a partitioned table to be captured
   under the name of the root table) but is not required.

4. Set the WAL level to logical:

   ```sql
   ALTER SYSTEM SET wal_level = logical;
   ```

5. Restart PostgreSQL to allow the WAL level change to take effect.
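
   After restarting, you can confirm the new setting took effect:

   ```sql
   SHOW wal_level;  -- should return 'logical'
   ```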

## Backfills and performance considerations

When a PostgreSQL capture is initiated, by default, the connector first *backfills*, or captures the targeted tables in their current state. It then transitions to capturing change events on an ongoing basis.

This is desirable in most cases, as it ensures that a complete view of your tables is captured into Flow.
However, you may find it appropriate to skip the backfill, especially for extremely large tables.

In this case, you may turn off backfilling on a per-table basis. See [properties](#properties) for details.
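
For example, a sketch of an endpoint configuration that skips backfills for two large tables using the `/advanced/skip_backfills` property (the table names here are hypothetical):

```yaml
config:
  address: "localhost:5432"
  database: "postgres"
  user: "flow_capture"
  password: "secret"
  advanced:
    # Comma-separated list of fully-qualified table names to skip backfilling.
    skip_backfills: "public.events,public.page_views"
```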

## Configuration

You configure connectors either in the Flow web app, or by directly editing the catalog specification file.
See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the PostgreSQL source connector.

### Properties

#### Endpoint

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| **`/address`** | Address | The host or host:port at which the database can be reached. | string | Required |
| **`/database`** | Database | Logical database name to capture from. | string | Required, `"postgres"` |
| **`/user`** | User | The database user to authenticate as. | string | Required, `"flow_capture"` |
| **`/password`** | Password | Password for the specified database user. | string | Required |
| `/advanced` | Advanced Options | Options for advanced users. You should not typically need to modify these. | object | |
| `/advanced/backfill_chunk_size` | Backfill Chunk Size | The number of rows which should be fetched from the database in a single backfill query. | integer | `4096` |
| `/advanced/publicationName` | Publication Name | The name of the PostgreSQL publication to replicate from. | string | `"flow_publication"` |
| `/advanced/skip_backfills` | Skip Backfills | A comma-separated list of fully-qualified table names which should not be backfilled. | string | |
| `/advanced/slotName` | Slot Name | The name of the PostgreSQL replication slot to replicate from. | string | `"flow_slot"` |
| `/advanced/watermarksTable` | Watermarks Table | The name of the table used for watermark writes during backfills. Must be fully-qualified in '&lt;schema&gt;.&lt;table&gt;' form. | string | `"public.flow_watermarks"` |
| `/advanced/sslmode` | SSL Mode | Overrides SSL connection behavior by setting the 'sslmode' parameter. | string | |

#### Bindings

| Property | Title | Description | Type | Required/Default |
|---|---|---|---|---|
| **`/namespace`** | Namespace | The [namespace/schema](https://www.postgresql.org/docs/9.1/ddl-schemas.html) of the table. | string | Required |
| **`/stream`** | Stream | Table name. | string | Required |
| **`/syncMode`** | Sync mode | Connection method. Always set to `incremental`. | string | Required |

#### SSL Mode

Certain managed PostgreSQL implementations may require you to explicitly set the [SSL Mode](https://www.postgresql.org/docs/current/libpq-ssl.html#LIBPQ-SSL-PROTECTION) to connect with Flow. One example is [Neon](https://neon.tech/docs/connect/connect-securely), which requires the setting `verify-full`. Check your managed PostgreSQL's documentation for details if you encounter errors related to the SSL mode configuration.
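
For instance, a sketch of the relevant portion of the endpoint configuration with the mode overridden (`verify-full` is just one of libpq's standard options):

```yaml
config:
  # ... other endpoint properties ...
  advanced:
    sslmode: verify-full
```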

### Sample

A minimal capture definition will look like the following:

```yaml
captures:
  ${PREFIX}/${CAPTURE_NAME}:
    endpoint:
      connector:
        image: "ghcr.io/estuary/source-postgres:dev"
        config:
          address: "localhost:5432"
          database: "postgres"
          user: "flow_capture"
          password: "secret"
    bindings:
      - resource:
          stream: ${TABLE_NAME}
          namespace: ${TABLE_NAMESPACE}
          syncMode: incremental
        target: ${PREFIX}/${COLLECTION_NAME}
```

Your capture definition will likely be more complex, with additional bindings for each table in the source database.

[Learn more about capture definitions.](/concepts/captures.md#pull-captures)

## TOASTed values

PostgreSQL has a hard page size limit, usually 8 KB, for performance reasons.
If your tables contain values that exceed the limit, those values can't be stored directly.
PostgreSQL uses [TOAST](https://www.postgresql.org/docs/current/storage-toast.html) (The Oversized-Attribute Storage Technique) to store them separately.

TOASTed values can sometimes present a challenge for systems that rely on the PostgreSQL write-ahead log (WAL), like this connector.
If a change event occurs on a row that contains a TOASTed value, _but the TOASTed value itself is unchanged_, it is omitted from the WAL.
As a result, the connector emits a row update with the value omitted, which might cause unexpected results in downstream catalog tasks if adjustments are not made.

The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](/concepts/connectors.md#flowctl-discover)
or use the [Flow UI](/concepts/connectors.md#flow-ui) to create your capture.
It uses [merge](/reference/reduction-strategies/merge.md) [reductions](/concepts/schemas.md#reductions)
to fill in the previous known TOASTed value in cases when that value is omitted from a row update.
However, due to the event-driven nature of certain tasks in Flow, it's still possible to see unexpected results in your data flow, specifically:

- When you materialize the captured data to another system using a connector that requires [delta updates](/concepts/materialization.md#delta-updates)
- When you perform a [derivation](/concepts/derivations.md) that uses TOASTed values
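
As a rough sketch (the field names are hypothetical), a collection schema using the merge reduction strategy to combine partial row updates looks something like:

```yaml
schema:
  type: object
  # Merge partial documents rather than replacing them outright.
  reduce: { strategy: merge }
  properties:
    id: { type: integer }
    large_text: { type: string }
  required: [id]
key: [/id]
```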

### Troubleshooting

If you encounter an issue that you suspect is due to TOASTed values, try the following:

- Ensure your collection's schema is using the merge [reduction strategy](/concepts/schemas.md#reduce-annotations).
- [Set REPLICA IDENTITY to FULL](https://www.postgresql.org/docs/9.4/sql-altertable.html) for the table, as shown below. This circumvents the problem by forcing the
  WAL to record all values regardless of size. However, this can have performance impacts on your database and must be carefully evaluated.
- [Contact Estuary support](mailto:support@estuary.dev) for assistance.
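
For example (the table name here is a placeholder):

```sql
-- Force the WAL to record complete row images for this table.
ALTER TABLE public.my_table REPLICA IDENTITY FULL;
```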

## Publications

It is recommended that the publication used by the capture contain only the tables that will be captured. In some cases, it may be desirable to create this publication for all tables in the database instead of specific tables, for example using:

```sql
CREATE PUBLICATION flow_publication FOR ALL TABLES WITH (publish_via_partition_root = true);
```

Use caution if creating the publication in this way: all existing tables (even those not part of the capture) will be included in it, and if any of them do not have a primary key, they will no longer be able to process updates or deletes.
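
To review which tables a publication currently includes, you can query the `pg_publication_tables` system view:

```sql
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'flow_publication';
```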