diff --git a/changelog/2024-03-24-conduit-0-9-0-release.md b/changelog/2024-03-24-conduit-0-9-0-release.md
index e7646bf3..f901a90b 100644
--- a/changelog/2024-03-24-conduit-0-9-0-release.md
+++ b/changelog/2024-03-24-conduit-0-9-0-release.md
@@ -19,5 +19,5 @@ Revolutionize your data processing with [**Conduit v0.9**](https://github.com/Co
- **Getting Started Guide**: A user-friendly guide is available to help new users set up Conduit and explore the latest features quickly.
:::tip
-For an in-depth look at how the enhanced processors can transform your data processing workflows, check out our [blog post](https://meroxa.com/blog/introducing-conduit-0.9-revolutionizing-data-processing-with-enhanced-processors/), and visit our [Processors documentation page](/docs/processors).
+For an in-depth look at how the enhanced processors can transform your data processing workflows, check out our [blog post](https://meroxa.com/blog/introducing-conduit-0.9-revolutionizing-data-processing-with-enhanced-processors/), and visit our [Processors documentation page](/docs/using/processors/getting-started).
:::
diff --git a/changelog/2024-08-19-conduit-0-11-0-release.md b/changelog/2024-08-19-conduit-0-11-0-release.md
index 20fb6b09..176f1689 100644
--- a/changelog/2024-08-19-conduit-0-11-0-release.md
+++ b/changelog/2024-08-19-conduit-0-11-0-release.md
@@ -16,5 +16,5 @@ We’re thrilled to announce the release of [**Conduit v0.11**](https://github.c
- **Enhanced Transformation Capabilities:** Easily transform data as it flows through your pipelines, making integration smoother and more efficient.
:::tip
-For an in-depth look at how these new features can elevate your data integration processes, check out our [blog post](https://meroxa.com/blog/conduit-v0.11-unveils-powerful-schema-support-for-enhanced-data-integration/), our [Schema Support documentation page](/docs/features/schema-support).
+For an in-depth look at how these new features can elevate your data integration processes, check out our [blog post](https://meroxa.com/blog/conduit-v0.11-unveils-powerful-schema-support-for-enhanced-data-integration/) and our [Schema Support documentation page](/docs/using/other-features/schema-support).
:::
diff --git a/changelog/2024-10-10-conduit-0-12-0-release.md b/changelog/2024-10-10-conduit-0-12-0-release.md
index 51ac0f1d..0421cd9e 100644
--- a/changelog/2024-10-10-conduit-0-12-0-release.md
+++ b/changelog/2024-10-10-conduit-0-12-0-release.md
@@ -16,5 +16,5 @@ We’re excited to announce the release of [**Conduit v0.12.0**](https://github.
- **Smart Retry Management:** Limits on retries prevent indefinite restarts, keeping your pipelines efficient and reliable.
:::tip
-For a detailed overview of how Pipeline Recovery works and its benefits, check out our [blog post](https://meroxa.com/blog/unlocking-resilience:-conduit-v0.12.0-introduces-pipeline-recovery/), or our documentation for [Pipeline Recovery](/docs/features/pipeline-recovery) and learn how to make your data streaming experience smoother than ever!
+For a detailed overview of how Pipeline Recovery works and its benefits, check out our [blog post](https://meroxa.com/blog/unlocking-resilience:-conduit-v0.12.0-introduces-pipeline-recovery/) or our documentation for [Pipeline Recovery](/docs/using/other-features/pipeline-recovery), and learn how to make your data streaming experience smoother than ever!
:::
diff --git a/changelog/2024-10-15-pipelines-exit-on-degraded.md b/changelog/2024-10-15-pipelines-exit-on-degraded.md
index 542ed9f4..1920c18c 100644
--- a/changelog/2024-10-15-pipelines-exit-on-degraded.md
+++ b/changelog/2024-10-15-pipelines-exit-on-degraded.md
@@ -19,7 +19,7 @@ $ conduit --help
...
```
-If you were using a [Conduit Configuration file](/docs/features/configuration) this should look like:
+If you were using a [Conduit Configuration file](/docs/configuration#configuration-file) this should look like:
```yaml title="conduit.yaml"
# ...
@@ -28,7 +28,7 @@ pipelines:
# ...
```
-Previously, this functionality was handled by `pipelines.exit-on-error`. However, with the introduction of [Pipeline Recovery](/docs/features/pipeline-recovery), the old description no longer accurately reflected the behavior, as a pipeline may not necessarily exit even in the presence of an error.
+Previously, this functionality was handled by `pipelines.exit-on-error`. However, with the introduction of [Pipeline Recovery](/docs/using/other-features/pipeline-recovery), the old description no longer accurately reflected the behavior, as a pipeline may not necessarily exit even in the presence of an error.
:::warning
The previous flag `pipelines.exit-on-error` will still be valid but is now hidden. We encourage all users to transition to `pipelines.exit-on-degraded` for improved clarity and functionality.
diff --git a/docs/introduction.mdx b/docs/0-what-is/0-introduction.mdx
similarity index 91%
rename from docs/introduction.mdx
rename to docs/0-what-is/0-introduction.mdx
index c5bee49d..bd76d85c 100644
--- a/docs/introduction.mdx
+++ b/docs/0-what-is/0-introduction.mdx
@@ -3,7 +3,7 @@ sidebar_position: 0
hide_title: true
title: 'Introduction'
sidebar_label: "Introduction"
-slug: /
+slug: '/'
---
-Conduit is a data integration tool for software engineers. Its purpose is to
+Conduit is a data integration tool for software engineers, powered by [Meroxa](https://meroxa.io). Its purpose is to
help you move data from A to B. You can use Conduit to send data from Kafka to
Postgres, between files and APIs,
-between [supported connectors](/docs/connectors/connector-list),
-and [any datastore you can build a plugin for](/docs/connectors/building-connectors/).
+between [supported connectors](/docs/using/connectors/list),
+and [any datastore you can build a plugin for](/docs/developing/connectors/).
It's written in [Go](https://go.dev/), compiles to a binary, and is designed to
-be easy to use and [deploy](/docs/getting-started/installing-and-running?option=binary).
+be easy to use and [deploy](/docs/installing-and-running?option=binary).
Out of the box, Conduit comes with:
-- A UI
- Common connectors
- Processors
- Observability
+- Schema Support
-In this getting started guide we'll use a pre-built binary, but Conduit can also be run using [Docker](/docs/getting-started/installing-and-running?option=docker).
+In this getting started guide we'll use a pre-built binary, but Conduit can also be run using [Docker](/docs/installing-and-running?option=docker).
## Some of its features
@@ -49,7 +49,7 @@ allows your data applications to act upon those changes in real-time.
Conduit connectors are plugins that communicate with Conduit via a gRPC
interface. This means that plugins can be written in any language as long as
they conform to the required interface. Check out
-our [connector docs](/docs/connectors)!
+our [connector docs](/docs/using/connectors/getting-started)!
## Installing
@@ -63,7 +63,7 @@ curl https://conduit.io/install.sh | bash
If you're not using macOS or Linux system, you can still install Conduit
following one of the different options provided
-in [our installation page](/docs/getting-started/installing-and-running).
+in [our installation page](/docs/installing-and-running).
## Starting Conduit
Now that we have Conduit installed let's start it up to see what happens.
@@ -116,7 +116,7 @@ Now that we have Conduit up and running you can now navigate to `http://localhos
![Conduit Pipeline](/img/conduit/pipeline.png)
## Building a pipeline
-While you can provision pipelines via Conduit's UI, the recommended way to do so is using a [pipeline configuation file](/docs/pipeline-configuration-files/getting-started).
+While you can provision pipelines via Conduit's UI, the recommended way to do so is using a [pipeline configuration file](/docs/using/pipelines/configuration-file).
For this example we'll create a pipeline that will move data from one file to another.
@@ -267,9 +267,9 @@ Congratulations! You've pushed data through your first Conduit pipeline.
Looking for more examples? Check out the examples in our [repo](https://github.com/ConduitIO/conduit/tree/main/examples).
Now that you've got the basics of running Conduit and creating a pipeline covered. Here are a few places to dive in deeper:
-- [Connectors](/docs/connectors/getting-started)
-- [Pipelines](/docs/pipeline-configuration-files/getting-started)
-- [Processors](/docs/processors/getting-started)
-- [Conduit Architecture](/docs/getting-started/architecture)
+- [Connectors](/docs/using/connectors/getting-started)
+- [Pipelines](/docs/using/pipelines/configuration-file)
+- [Processors](/docs/using/processors/getting-started)
+- [Conduit Architecture](/docs/core-concepts/architecture)
![scarf pixel conduit-site-docs-introduction](https://static.scarf.sh/a.png?x-pxid=01346572-0d57-4df3-8399-1425db913a0a)
\ No newline at end of file
diff --git a/docs/getting-started/architecture.mdx b/docs/0-what-is/1-core-concepts/0-architecture.mdx
similarity index 96%
rename from docs/getting-started/architecture.mdx
rename to docs/0-what-is/1-core-concepts/0-architecture.mdx
index 70f243ce..f28f5ada 100644
--- a/docs/getting-started/architecture.mdx
+++ b/docs/0-what-is/1-core-concepts/0-architecture.mdx
@@ -1,6 +1,6 @@
---
title: "Conduit Architecture"
-sidebar_position: 2
+slug: '/core-concepts/architecture'
---
Here is an overview of the internal Conduit Architecture.
@@ -93,7 +93,7 @@ as soon as possible without draining the pipeline.
This layer is used directly by the [Orchestration layer](#orchestration-layer) and indirectly by the [Core layer](#core-layer), and [Schema registry service](#schema-registry-service) (through stores) to persist data. It provides the functionality of creating transactions and storing, retrieving and deleting arbitrary data like configurations or state.
-More information on [storage](/docs/features/storage).
+More information on [storage](/docs/using/other-features/storage).
## Connector utility services
@@ -101,6 +101,6 @@ More information on [storage](/docs/features/storage).
The schema service is responsible for managing the schema of the records that flow through the pipeline. It provides functionality to infer a schema from a record. The schema is stored in the schema store and can be referenced by connectors and processors. By default, Conduit provides a built-in schema registry, but this service can be run separately from Conduit.
-More information on [Schema Registry](/docs/features/schema-support#schema-registry).
+More information on [Schema Registry](/docs/using/other-features/schema-support#schema-registry).
![scarf pixel conduit-site-docs-introduction](https://static.scarf.sh/a.png?x-pxid=01346572-0d57-4df3-8399-1425db913a0a)
\ No newline at end of file
diff --git a/docs/features/pipeline-semantics.mdx b/docs/0-what-is/1-core-concepts/1-pipeline-semantics.mdx
similarity index 99%
rename from docs/features/pipeline-semantics.mdx
rename to docs/0-what-is/1-core-concepts/1-pipeline-semantics.mdx
index 93850884..b85b78bb 100644
--- a/docs/features/pipeline-semantics.mdx
+++ b/docs/0-what-is/1-core-concepts/1-pipeline-semantics.mdx
@@ -1,6 +1,6 @@
---
title: "Pipeline Semantics"
-sidebar_position: 6
+slug: '/core-concepts/pipeline-semantics'
---
This document describes the inner workings of a Conduit pipeline, its structure, and behavior. It also describes a
diff --git a/docs/0-what-is/1-core-concepts/_category_.json b/docs/0-what-is/1-core-concepts/_category_.json
new file mode 100644
index 00000000..74151c85
--- /dev/null
+++ b/docs/0-what-is/1-core-concepts/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Core concepts"
+}
diff --git a/docs/0-what-is/1-core-concepts/index.mdx b/docs/0-what-is/1-core-concepts/index.mdx
new file mode 100644
index 00000000..e937516a
--- /dev/null
+++ b/docs/0-what-is/1-core-concepts/index.mdx
@@ -0,0 +1,59 @@
+---
+title: "Core concepts"
+slug: '/core-concepts'
+---
+
+## Pipeline
+
+A pipeline receives records from one or multiple source connectors and pushes them through zero
+or more processors until they reach one or multiple destination connectors.
+
+## Connector
+
+A connector is the internal entity that communicates with a connector plugin and either pushes
+records from the plugin into the pipeline (source connector) or the other way around
+(destination connector).
+
+## Connector plugin
+
+A connector plugin, sometimes simply referred to as a "plugin", is an external process which
+communicates with Conduit and knows how to read/write records from/to a data source/destination (e.g. a database).
+
+## Processor
+
+A processor is a component that executes an operation on a single record that flows through the
+pipeline. It can either change the record or filter it out based on some criteria.
+
+## OpenCDC Record
+
+A record represents a single piece of data that flows through a pipeline (e.g. one database row).
+[More info here](/docs/using/opencdc-record).
+
+## Collection
+
+A generic term used in Conduit to describe an entity in a 3rd party system from which records
+are read or to which records are written. Examples are: topics (in Kafka), tables
+(in a database), indexes (in a search engine), collections (in NoSQL databases), etc.
+
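+As a minimal sketch, here is how these concepts map onto a pipeline
+configuration file (ids, plugin names, and settings are illustrative):
+
+```yaml
+pipelines:
+  - id: example-pipeline            # the pipeline
+    connectors:
+      - id: source-file
+        type: source                # a source connector
+        plugin: builtin:file        # the connector plugin it uses
+        processors:
+          - id: set-department      # a processor attached to the connector
+            plugin: field.set
+            settings:
+              field: .Payload.After.department
+              value: engineering
+      - id: destination-file
+        type: destination           # a destination connector
+        plugin: builtin:file
+```
+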
+![scarf pixel conduit-site-docs-introduction](https://static.scarf.sh/a.png?x-pxid=01346572-0d57-4df3-8399-1425db913a0a)
\ No newline at end of file
diff --git a/docs/pipeline-configuration-files/getting-started.mdx b/docs/0-what-is/1-getting-started.mdx
similarity index 94%
rename from docs/pipeline-configuration-files/getting-started.mdx
rename to docs/0-what-is/1-getting-started.mdx
index 1fae173c..ffebf70f 100644
--- a/docs/pipeline-configuration-files/getting-started.mdx
+++ b/docs/0-what-is/1-getting-started.mdx
@@ -1,7 +1,8 @@
---
-title: 'Getting Started with Pipeline Configuration Files'
+title: 'Getting Started'
sidebar_label: "Getting Started"
sidebar_position: 0
+slug: '/getting-started'
---
Pipeline configuration files give you the ability to define pipelines that are
@@ -13,7 +14,7 @@ configurations.
:::tip
-In our [Conduit repository](https://github.com/ConduitIO/conduit), you can find [more examples](https://github.com/ConduitIO/conduit/tree/main/examples/pipelines), but to ilustrate a simple use case we'll show a pipeline using a file as a source, and another file as a destination. Check out the different [specifications](/docs/pipeline-configuration-files/specifications) to see the different configuration options.
+In our [Conduit repository](https://github.com/ConduitIO/conduit), you can find [more examples](https://github.com/ConduitIO/conduit/tree/main/examples/pipelines), but to illustrate a simple use case we'll show a pipeline using a file as a source, and another file as a destination. Check out the [specifications](/docs/using/pipelines/configuration-file) to see the different configuration options.
:::
diff --git a/docs/getting-started/installing-and-running.mdx b/docs/1-using/0-installing-and-running.mdx
similarity index 95%
rename from docs/getting-started/installing-and-running.mdx
rename to docs/1-using/0-installing-and-running.mdx
index 874ee83f..973a9ece 100644
--- a/docs/getting-started/installing-and-running.mdx
+++ b/docs/1-using/0-installing-and-running.mdx
@@ -1,7 +1,7 @@
---
title: "Installing and running"
-sidebar_position: 0
hide_table_of_contents: true
+slug: '/installing-and-running'
---
import Tabs from '@theme/Tabs';
@@ -155,11 +155,11 @@ You should now be able to interact with the Conduit UI and HTTP API on port 8080
## Next Steps
Now that you have Conduit installed you can
-learn [how to build a pipeline](/docs/how-to/build-generator-to-log-pipeline).
+learn [how to get started](/docs/getting-started).
You can also explore some other topics, such as:
-- [Pipelines](/docs/pipeline-configuration-files/getting-started)
-- [Connectors](/docs/connectors/getting-started)
-- [Processors](/docs/processors/getting-started)
+- [Pipelines](/docs/using/pipelines/configuration-file)
+- [Connectors](/docs/using/connectors/getting-started)
+- [Processors](/docs/using/processors/getting-started)
![scarf pixel conduit-site-docs-running](https://static.scarf.sh/a.png?x-pxid=db6468a8-7998-463e-800f-58a619edd9b3)
diff --git a/docs/1-using/1-configuration.mdx b/docs/1-using/1-configuration.mdx
new file mode 100644
index 00000000..92ef0dd8
--- /dev/null
+++ b/docs/1-using/1-configuration.mdx
@@ -0,0 +1,113 @@
+---
+title: 'How to configure Conduit'
+sidebar_label: 'Configuration'
+slug: '/configuration'
+---
+
+Conduit accepts CLI flags, environment variables and a configuration file to
+configure its behavior. Each CLI flag has a corresponding environment variable
+and a corresponding field in the configuration file. Conduit uses the value for
+each configuration option based on the following priorities:
+
+## CLI flags
+
+ **CLI flags** (highest priority) - if a CLI flag is provided it will always be
+ respected, regardless of the environment variable or configuration file. To
+ see a full list of available flags run `conduit --help`:
+
+
+```bash
+$ conduit --help
+Usage of conduit:
+ -api.enabled
+ enable HTTP and gRPC API (default true)
+ -config string
+ global config file (default "conduit.yaml")
+ -connectors.path string
+ path to standalone connectors' directory (default "./connectors")
+ -db.badger.path string
+ path to badger DB (default "conduit.db")
+ -db.postgres.connection-string string
+ postgres connection string, may be a database URL or in PostgreSQL keyword/value format
+ -db.postgres.table string
+ postgres table in which to store data (will be created if it does not exist) (default "conduit_kv_store")
+ -db.sqlite.path string
+ path to sqlite3 DB (default "conduit.db")
+ -db.sqlite.table string
+ sqlite3 table in which to store data (will be created if it does not exist) (default "conduit_kv_store")
+ -db.type string
+ database type; accepts badger,postgres,inmemory,sqlite (default "badger")
+ -grpc.address string
+ address for serving the gRPC API (default ":8084")
+ -http.address string
+ address for serving the HTTP API (default ":8080")
+ -log.format string
+ sets the format of the logging; accepts json, cli (default "cli")
+ -log.level string
+ sets logging level; accepts debug, info, warn, error, trace (default "info")
+ -pipelines.error-recovery.backoff-factor int
+ backoff factor applied to the last delay (default 2)
+ -pipelines.error-recovery.max-delay duration
+ maximum delay before restart (default 10m0s)
+ -pipelines.error-recovery.max-retries int
+ maximum number of retries (default -1)
+ -pipelines.error-recovery.max-retries-window duration
+ amount of time running without any errors after which a pipeline is considered healthy (default 5m0s)
+ -pipelines.error-recovery.min-delay duration
+ minimum delay before restart (default 1s)
+ -pipelines.exit-on-degraded
+ exit Conduit if a pipeline enters a degraded state
+ -pipelines.path string
+ path to the directory that has the yaml pipeline configuration files, or a single pipeline configuration file (default "./pipelines")
+ -processors.path string
+ path to standalone processors' directory (default "./processors")
+ -schema-registry.confluent.connection-string string
+ confluent schema registry connection string
+ -schema-registry.type string
+ schema registry type; accepts builtin,confluent (default "builtin")
+ -version
+ prints current Conduit version
+```
+
+## Environment variables
+
+**Environment variables** (lower priority) - an environment variable is only
+ used if no CLI flag is provided for the same option. Environment variables
+ have the prefix `CONDUIT` and contain underscores instead of dots and
+ hyphens (e.g. the flag `-db.postgres.connection-string` corresponds
+ to `CONDUIT_DB_POSTGRES_CONNECTION_STRING`).
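+
+For example, a minimal sketch of setting the same options through environment
+variables instead of flags (the connection string is illustrative):
+
+```bash
+# Equivalent to the flags -db.type and -db.postgres.connection-string:
+export CONDUIT_DB_TYPE=postgres
+export CONDUIT_DB_POSTGRES_CONNECTION_STRING="postgres://localhost:5432/conduitdb"
+./conduit
+```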
+
+## Configuration file
+
+**Configuration file** (lowest priority) - Conduit by default loads the
+ file `conduit.yaml` placed in the same folder as Conduit. The path to the file
+ can be customized using the CLI flag `-config`. It is not required to provide
+ a configuration file and any value in the configuration file can be overridden
+ by an environment variable or a flag. The file content should be a YAML
+ document where keys can be hierarchically split on `.`. For example:
+
+ ```yaml
+ db:
+ type: postgres # corresponds to flag -db.type and env variable CONDUIT_DB_TYPE
+ postgres:
+ connection-string: postgres://localhost:5432/conduitdb # -db.postgres.connection-string or CONDUIT_DB_POSTGRES_CONNECTION_STRING
+ ```
diff --git a/docs/features/opencdc-record.mdx b/docs/1-using/2-opencdc-record.mdx
similarity index 98%
rename from docs/features/opencdc-record.mdx
rename to docs/1-using/2-opencdc-record.mdx
index 5d9d6a37..dcbcc8a8 100644
--- a/docs/features/opencdc-record.mdx
+++ b/docs/1-using/2-opencdc-record.mdx
@@ -1,6 +1,5 @@
---
title: 'OpenCDC record'
-sidebar_position: 4
---
An OpenCDC record in Conduit aims to standardize the format of data records
@@ -130,7 +129,7 @@ the `opencdc.StructuredData` type.
The supported data types for values in `opencdc.StructuredData` depend on following:
- connector or processor type (built-in or standalone)
-- [schema support](/docs/features/schema-support) (enabled or disabled).
+- [schema support](/docs/using/other-features/schema-support) (enabled or disabled).
In built-in connectors, the field values can be of any Go type, given that
there's no (de)serialization involved.
@@ -357,7 +356,7 @@ The version of the destination plugin that has written the record.
```
### `conduit.dlq.nack.error`
-Contains the error that caused a record to be nacked and pushed to the [dead-letter queue (DLQ)](/docs/features/dead-letter-queue).
+Contains the error that caused a record to be nacked and pushed to the [dead-letter queue (DLQ)](/docs/using/other-features/dead-letter-queue).
### `conduit.dlq.nack.node.id`
The ID of the internal node that nacked the record.
diff --git a/docs/pipeline-configuration-files/specifications.mdx b/docs/1-using/3-pipelines/0-configuration-file.mdx
similarity index 94%
rename from docs/pipeline-configuration-files/specifications.mdx
rename to docs/1-using/3-pipelines/0-configuration-file.mdx
index ff46bf8f..5ce99c99 100644
--- a/docs/pipeline-configuration-files/specifications.mdx
+++ b/docs/1-using/3-pipelines/0-configuration-file.mdx
@@ -1,7 +1,7 @@
---
-title: 'Specifications'
+title: 'Pipeline Configuration File'
+sidebar_label: 'Configuration File'
toc_max_heading_level: 6
-sidebar_position: 2
---
:::info
@@ -125,7 +125,7 @@ start from the beginning.
- **Allowed Values**: Strings with a length limit of *128* characters.
- **Description**: Human readable name for the pipeline. Needs to be unique
across all pipelines (it is used as a label in pipeline
- [metrics](/docs/features/metrics)).
+ [metrics](/docs/using/other-features/metrics)).
### description
@@ -161,7 +161,7 @@ start from the beginning.
- **Required**: No
- **Default**: See sub-fields
- **Description**: This node contains the dead-letter queue configuration. Read
- more about [dead-letter queues](/docs/features/dead-letter-queue) in Conduit.
+ more about [dead-letter queues](/docs/using/other-features/dead-letter-queue) in Conduit.
#### plugin
@@ -170,7 +170,7 @@ start from the beginning.
- **Default**: `builtin:log`
- **Description**: This node references the destination connector plugin used
for storing dead-letters. See how to
- [reference a connector](/docs/connectors/referencing).
+ [reference a connector](/docs/using/connectors/referencing).
#### settings
@@ -188,7 +188,7 @@ start from the beginning.
- **Required**: No
- **Default**: `1`
- **Description**: Defines the nack window size. See
- [dead-letter queue](/docs/features/dead-letter-queue#nack-window-and-threshold).
+ [dead-letter queue](/docs/using/other-features/dead-letter-queue#nack-window-and-threshold).
#### window-nack-threshold
@@ -196,7 +196,7 @@ start from the beginning.
- **Required**: No
- **Default**: `0`
- **Description**: Defines the nack window threshold. See
- [dead-letter queue](/docs/features/dead-letter-queue#nack-window-and-threshold).
+ [dead-letter queue](/docs/using/other-features/dead-letter-queue#nack-window-and-threshold).
## connector
@@ -238,7 +238,7 @@ start from the beginning.
- **Required**: Yes
- **Default**: None
- **Description**: This node references the connector plugin. See how to
- [reference a connector](/docs/connectors/referencing).
+ [reference a connector](/docs/using/connectors/referencing).
**Warning**: Changing this property will cause the connector to start from the
beginning.
@@ -293,7 +293,7 @@ beginning.
- **Required**: Yes
- **Default**: None
- **Description**: Defines the processor's plugin name (e.g. `field.set`). Check out the
- [processors documentation](/docs/processors/builtin) to find the list of builtin processors
+ [processors documentation](/docs/using/processors/builtin) to find the list of builtin processors
we provide.
### condition
diff --git a/docs/pipeline-configuration-files/provisioning.mdx b/docs/1-using/3-pipelines/1-provisioning.mdx
similarity index 91%
rename from docs/pipeline-configuration-files/provisioning.mdx
rename to docs/1-using/3-pipelines/1-provisioning.mdx
index b867186c..e4c9aab6 100644
--- a/docs/pipeline-configuration-files/provisioning.mdx
+++ b/docs/1-using/3-pipelines/1-provisioning.mdx
@@ -1,11 +1,10 @@
---
title: 'Provisioning'
-sidebar_position: 1
---
This document describes how provisioning of pipeline configuration files works
in Conduit. To see how a pipeline configuration file is structured check the
-[specifications](/docs/pipeline-configuration-files/specifications).
+[specifications](/docs/using/pipelines/configuration-file).
## Conduit provisions pipelines at startup
@@ -91,14 +90,14 @@ beginning, as if it's running for the first time.
Here is a full list of fields that will cause the connector to start from the
beginning if they are updated:
-- [`pipeline.id`](/docs/pipeline-configuration-files/specifications#id) - The
+- [`pipeline.id`](/docs/using/pipelines/configuration-file#id) - The
entire pipeline will be recreated and all source connectors will start from
the beginning.
-- [`connector.id`](/docs/pipeline-configuration-files/specifications#id-1) - The
+- [`connector.id`](/docs/using/pipelines/configuration-file#id-1) - The
updated connector will start from the beginning (only source connectors).
-- [`connector.type`](/docs/pipeline-configuration-files/specifications#type) -
+- [`connector.type`](/docs/using/pipelines/configuration-file#type) -
The updated connector will start from the beginning (only source connectors).
-- [`connector.plugin`](/docs/pipeline-configuration-files/specifications#plugin-1) -
+- [`connector.plugin`](/docs/using/pipelines/configuration-file#plugin-1) -
The updated connector will start from the beginning (only source connectors).
## Deleting a provisioned pipeline
diff --git a/docs/1-using/3-pipelines/2-statuses.mdx b/docs/1-using/3-pipelines/2-statuses.mdx
new file mode 100644
index 00000000..25bc0066
--- /dev/null
+++ b/docs/1-using/3-pipelines/2-statuses.mdx
@@ -0,0 +1,41 @@
+---
+title: 'Statuses'
+---
+
+Understanding the different statuses of your pipeline is crucial for effective monitoring and management. Below are the various statuses that a pipeline can have in Conduit.
+
+## Statuses Overview
+
+| Status | Description |
+|------------------------|-----------------------------------------------------------------------------|
+| **Running** | The pipeline is actively processing data and functioning as expected. |
+| **Stopped** | The pipeline was stopped gracefully, either by the system or manually by a user. |
+| **Degraded** | The pipeline has been stopped due to an error or force-stopped by a user. |
+| **Recovering** | The pipeline is in the process of recovering from a degraded state or error. |
+
+## Status Descriptions
+
+### Running
+- **Definition**: The pipeline is currently active and processing data without any issues.
+- **Implication**: All systems are functioning normally, and data is flowing as expected.
+
+### Stopped
+- **Definition**: The pipeline has been halted by the system, or manually stopped by a user.
+- **Implication**: The pipeline stopped gracefully: any in-flight records were flushed, acks were delivered back to the source, and no error was encountered while stopping. The pipeline can be restarted.
+
+### Degraded
+- **Definition**: The pipeline stopped due to an error that couldn't be recovered, or force-stopped by the user.
+- **Implication**: Any in-flight records were dropped, and not all acks may have been delivered back to the source. Restarting a degraded pipeline could result in duplicated data (depending on how the destination connector handles records), since some data might get re-delivered.
+
+### Recovering
+- **Definition**: The pipeline is in the process of recovering from a degraded state or error.
+- **Implication**: The system is attempting to restore normal operations. Monitor the status to ensure recovery is successful.
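+
+To check a pipeline's current status you can use Conduit's HTTP API. A quick
+sketch, assuming the default HTTP address `:8080` (the exact response shape may
+differ between versions):
+
+```bash
+# List all pipelines along with their state; jq is used only for readability.
+curl -s http://localhost:8080/v1/pipelines | jq '.pipelines[] | {id, state}'
+```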
diff --git a/docs/1-using/3-pipelines/_category_.json b/docs/1-using/3-pipelines/_category_.json
new file mode 100644
index 00000000..da65675f
--- /dev/null
+++ b/docs/1-using/3-pipelines/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Pipelines"
+}
diff --git a/docs/connectors/getting-started.mdx b/docs/1-using/4-connectors/0-getting-started.mdx
similarity index 92%
rename from docs/connectors/getting-started.mdx
rename to docs/1-using/4-connectors/0-getting-started.mdx
index 81812aaa..b4b31008 100644
--- a/docs/connectors/getting-started.mdx
+++ b/docs/1-using/4-connectors/0-getting-started.mdx
@@ -1,6 +1,5 @@
---
title: 'Getting Started with Connectors'
-sidebar_position: 0
sidebar_label: "Getting Started"
---
@@ -32,8 +31,8 @@ Conduit ships with a number of built-in connectors:
Besides these connectors there is a number of standalone connectors that can be
added to Conduit as plugins (find the complete
-list [here](/docs/connectors/connector-list)).
+list [here](/docs/using/connectors/list)).
-Have a look at how to [install a connector](/docs/connectors/installing) next!
+Have a look at how to [install a connector](/docs/using/connectors/installing) next!
![scarf pixel conduit-site-docs-connectors](https://static.scarf.sh/a.png?x-pxid=2fa824d7-fd94-4cf9-a5c8-ea63c9860213)
\ No newline at end of file
diff --git a/docs/connectors/installing.mdx b/docs/1-using/4-connectors/1-installing.mdx
similarity index 90%
rename from docs/connectors/installing.mdx
rename to docs/1-using/4-connectors/1-installing.mdx
index 8588cc11..1235af72 100644
--- a/docs/connectors/installing.mdx
+++ b/docs/1-using/4-connectors/1-installing.mdx
@@ -1,10 +1,9 @@
---
title: "Installing Connectors"
-sidebar_position: 1
---
Beside the built-in connectors shipped with Conduit there is a
-[list](/docs/connectors/connector-list) of
+[list](/docs/using/connectors/list) of
connectors that can be added to Conduit as plugins. These are called standalone
connectors.
@@ -31,6 +30,6 @@ be adjusted using the CLI flag `-connectors.path`, for example:
Names of the connector binaries are not important, since Conduit is getting the
information about connectors from connectors themselves (using their gRPC API).
-Find out how to [reference your connector](/docs/connectors/referencing).
+Find out how to [reference your connector](/docs/using/connectors/referencing).
![scarf pixel conduit-site-docs-connectors](https://static.scarf.sh/a.png?x-pxid=2fa824d7-fd94-4cf9-a5c8-ea63c9860213)
\ No newline at end of file
diff --git a/docs/connectors/referencing.mdx b/docs/1-using/4-connectors/2-referencing.mdx
similarity index 98%
rename from docs/connectors/referencing.mdx
rename to docs/1-using/4-connectors/2-referencing.mdx
index fa116510..d485746b 100644
--- a/docs/connectors/referencing.mdx
+++ b/docs/1-using/4-connectors/2-referencing.mdx
@@ -1,6 +1,5 @@
---
title: "Referencing Connectors"
-sidebar_position: 2
---
The name, used to reference a connector plugin in API requests or a pipeline
diff --git a/docs/connectors/connector-list.mdx b/docs/1-using/4-connectors/3-list.mdx
similarity index 93%
rename from docs/connectors/connector-list.mdx
rename to docs/1-using/4-connectors/3-list.mdx
index 7c17eb6a..bd95d43b 100644
--- a/docs/connectors/connector-list.mdx
+++ b/docs/1-using/4-connectors/3-list.mdx
@@ -1,6 +1,5 @@
---
title: "Connector List"
-sidebar_position: 8
---
The Conduit team and our community of developers are always adding new connectors.
@@ -10,7 +9,7 @@ and [destination](https://github.com/ConduitIO/conduit/issues?q=is%3Aissue+label
connector lists.
![scarf pixel conduit-site-docs-connectors](https://static.scarf.sh/a.png?x-pxid=2fa824d7-fd94-4cf9-a5c8-ea63c9860213)
-Don't have time to wait? You can get started [building your own](/docs/connectors/building-connectors/) in no time.
+Don't have time to wait? You can get started [building your own](/docs/developing/connectors/) in no time.
### Built-in vs Standalone
@@ -18,7 +17,7 @@ A Conduit connector can run in one of two ways: _built-in_ or _standalone_.
Built-in refers to connectors that are compiled into the Conduit binary, while
standalone refers to connectors that run separately from Conduit. You can learn
more about standalone vs built-in connectors on our
-[Connector Behavior](/docs/connectors/behavior) page. A small set of connectors
+[Connector Behavior](/docs/developing/connectors/behavior) page. A small set of connectors
are built into Conduit by default. For those connectors no additional setup is
required and you can start using them in Conduit right away.
diff --git a/docs/connectors/kafka-connect-connector.mdx b/docs/1-using/4-connectors/4-kafka-connect-connector.mdx
similarity index 97%
rename from docs/connectors/kafka-connect-connector.mdx
rename to docs/1-using/4-connectors/4-kafka-connect-connector.mdx
index 23c2ef4a..40d3bfa7 100644
--- a/docs/connectors/kafka-connect-connector.mdx
+++ b/docs/1-using/4-connectors/4-kafka-connect-connector.mdx
@@ -1,6 +1,5 @@
---
title: "Kafka Connect Connectors with Conduit"
-sidebar_position: 9
---
# Using Kafka Connect Connectors with Conduit
@@ -82,7 +81,7 @@ Now that the Kafka Connect connectors included in `lib`, we can use it in a pipe
1. [Install Conduit](https://github.com/ConduitIO/conduit#installation-guide).
2. Create a pipeline configuration file: Create a folder called `pipelines` at the same level as your Conduit
-binary. Inside of that folder create a file named `jdbc-to-file.yml`, check [Specifications](https://conduit.io/docs/pipeline-configuration-files/specifications)
+binary. Inside of that folder create a file named `jdbc-to-file.yml`, check [Specifications](https://conduit.io/docs/using/pipelines/configuration-file)
for more details about Pipeline Configuration Files.
````yaml
diff --git a/docs/connectors/additional-built-in-plugins.mdx b/docs/1-using/4-connectors/5-additional-built-in-plugins.mdx
similarity index 99%
rename from docs/connectors/additional-built-in-plugins.mdx
rename to docs/1-using/4-connectors/5-additional-built-in-plugins.mdx
index 9d57f4bc..89b9463c 100644
--- a/docs/connectors/additional-built-in-plugins.mdx
+++ b/docs/1-using/4-connectors/5-additional-built-in-plugins.mdx
@@ -1,6 +1,5 @@
---
title: "Adding built-in Connectors"
-sidebar_position: 4
---
Built-in connectors offer better performance when compared to standalone ones,
diff --git a/docs/connectors/configuration-parameters/output-format.mdx b/docs/1-using/4-connectors/6-configuration-parameters/0-output-format.mdx
similarity index 97%
rename from docs/connectors/configuration-parameters/output-format.mdx
rename to docs/1-using/4-connectors/6-configuration-parameters/0-output-format.mdx
index 67092e78..dfdafd1c 100644
--- a/docs/connectors/configuration-parameters/output-format.mdx
+++ b/docs/1-using/4-connectors/6-configuration-parameters/0-output-format.mdx
@@ -1,11 +1,10 @@
---
title: "Output Format"
-sidebar_position: 1
---
One of the challenges to be solved when integrating Conduit with other systems, such as Kafka Connect and Debezium, is
the data format. This is present in situations where raw data needs to be written, for example when writing messages to
-Kafka. By default, Conduit uses the [OpenCDC format](/docs/features/opencdc-record). Conduit also makes it possible to
+Kafka. By default, Conduit uses the [OpenCDC format](/docs/using/opencdc-record). Conduit also makes it possible to
change the output format so that the data can be consumed by other systems.
:::note
diff --git a/docs/connectors/configuration-parameters/rate-limiting.mdx b/docs/1-using/4-connectors/6-configuration-parameters/1-rate-limiting.mdx
similarity index 98%
rename from docs/connectors/configuration-parameters/rate-limiting.mdx
rename to docs/1-using/4-connectors/6-configuration-parameters/1-rate-limiting.mdx
index a821aa54..bb3f0c6a 100644
--- a/docs/connectors/configuration-parameters/rate-limiting.mdx
+++ b/docs/1-using/4-connectors/6-configuration-parameters/1-rate-limiting.mdx
@@ -1,6 +1,5 @@
---
title: "Rate Limiting"
-sidebar_position: 2
---
Destination connectors can be configured to limit the rate at which records can be written. This is especially useful when the destination resource has a rate limit to ensure that the connector does not exceed it. By default, Conduit does not limit the rate at which records are written.
diff --git a/docs/connectors/configuration-parameters/schema-extraction.mdx b/docs/1-using/4-connectors/6-configuration-parameters/2-schema-extraction.mdx
similarity index 95%
rename from docs/connectors/configuration-parameters/schema-extraction.mdx
rename to docs/1-using/4-connectors/6-configuration-parameters/2-schema-extraction.mdx
index 43841950..fe7fb2b8 100644
--- a/docs/connectors/configuration-parameters/schema-extraction.mdx
+++ b/docs/1-using/4-connectors/6-configuration-parameters/2-schema-extraction.mdx
@@ -1,6 +1,5 @@
---
title: "Schema Extraction"
-sidebar_position: 3
---
Source and destination connectors can be configured to automatically extract the
@@ -29,7 +28,7 @@ connectors):
:::caution
`sdk.schema.extract.payload.enabled` and `sdk.schema.extract.key.enabled` should be set to `false` when producing raw (not structured) data, as shown in the example below.
-If you are developing a connector, you can disable this automatically by updating the connector's default middleware. For more information about `NewSource()` when developing a source connector, see [here](/docs/connectors/building-connectors/developing-source-connectors/#newsource).
+If you are developing a connector, you can disable this automatically by updating the connector's default middleware. For more information about `NewSource()` when developing a source connector, see [here](/docs/developing/connectors/developing-source-connectors/#newsource).
:::
## Example
@@ -100,6 +99,6 @@ something below in the record's metadata:
:::tip
-To learn more about **Schema Support**, check out [this page](/docs/features/schema-support).
+To learn more about **Schema Support**, check out [this page](/docs/using/other-features/schema-support).
:::
diff --git a/docs/connectors/configuration-parameters/batching.mdx b/docs/1-using/4-connectors/6-configuration-parameters/3-batching.mdx
similarity index 99%
rename from docs/connectors/configuration-parameters/batching.mdx
rename to docs/1-using/4-connectors/6-configuration-parameters/3-batching.mdx
index 00a42e1b..7c1afb03 100644
--- a/docs/connectors/configuration-parameters/batching.mdx
+++ b/docs/1-using/4-connectors/6-configuration-parameters/3-batching.mdx
@@ -1,6 +1,5 @@
---
title: "Batching"
-sidebar_position: 0
---
Destination connectors can be configured to process records in batches. This is especially useful when the destination
diff --git a/docs/connectors/configuration-parameters/_category_.json b/docs/1-using/4-connectors/6-configuration-parameters/_category_.json
similarity index 72%
rename from docs/connectors/configuration-parameters/_category_.json
rename to docs/1-using/4-connectors/6-configuration-parameters/_category_.json
index 74e4493a..b4d499f3 100644
--- a/docs/connectors/configuration-parameters/_category_.json
+++ b/docs/1-using/4-connectors/6-configuration-parameters/_category_.json
@@ -1,4 +1,3 @@
{
"label": "Configuration parameters",
- "position": 3
}
diff --git a/docs/connectors/configuration-parameters/configuration-parameters.mdx b/docs/1-using/4-connectors/6-configuration-parameters/index.mdx
similarity index 89%
rename from docs/connectors/configuration-parameters/configuration-parameters.mdx
rename to docs/1-using/4-connectors/6-configuration-parameters/index.mdx
index 7131119f..d4609145 100644
--- a/docs/connectors/configuration-parameters/configuration-parameters.mdx
+++ b/docs/1-using/4-connectors/6-configuration-parameters/index.mdx
@@ -1,6 +1,5 @@
---
title: 'Configuration Parameters'
-sidebar_position: 0
---
import DocCardList from '@theme/DocCardList';
diff --git a/docs/1-using/4-connectors/_category_.json b/docs/1-using/4-connectors/_category_.json
new file mode 100644
index 00000000..f0a72a8c
--- /dev/null
+++ b/docs/1-using/4-connectors/_category_.json
@@ -0,0 +1,3 @@
+{
+ "label": "Connectors"
+}
diff --git a/docs/processors/getting-started.mdx b/docs/1-using/5-processors/0-getting-started.mdx
similarity index 72%
rename from docs/processors/getting-started.mdx
rename to docs/1-using/5-processors/0-getting-started.mdx
index fa4d13e6..463d2d0f 100644
--- a/docs/processors/getting-started.mdx
+++ b/docs/1-using/5-processors/0-getting-started.mdx
@@ -1,11 +1,10 @@
---
title: 'Getting Started with Processors'
sidebar_label: 'Getting Started'
-sidebar_position: 0
---
A processor is a component that operates on a single record that flows through a pipeline. It can either **transform** the record, or **filter** it out based on some criteria. Since they are part of pipelines, making
-yourself familiar with [pipeline semantics](/docs/features/pipeline-semantics) is highly recommended.
+yourself familiar with [pipeline semantics](/docs/core-concepts/pipeline-semantics) is highly recommended.
![Pipeline](/img/pipeline_example.svg)
@@ -26,21 +25,21 @@ attached to a single parent, which can be either a connector or a pipeline:
When it comes to using a processor, Conduit supports different types:
-- [Built-in processors](/docs/processors/builtin) will perform the most common operations you could expect such as filtering fields, replacing fields, posting payloads to a HTTP endpoint, etc. These are already coming as part of Conduit, and you can simply start using them with a bit of configuration. [Check out this document to see everything that's available](/docs/processors/builtin).
-- [Standalone processors](/docs/processors/standalone) are the ones you could write yourself to do anything that's not already covered by the [Built-in](/docs/processors/builtin) ones. [Here's](/docs/processors/standalone) more information about them.
+- [Built-in processors](/docs/using/processors/builtin) perform the most common operations you could expect, such as filtering fields, replacing fields, or posting payloads to an HTTP endpoint. They ship as part of Conduit, so you can start using them with just a bit of configuration. [Check out this document to see everything that's available](/docs/using/processors/builtin).
+- [Standalone processors](/docs/developing/processors) are the ones you can write yourself to cover anything the [built-in](/docs/using/processors/builtin) ones don't. [Here's](/docs/developing/processors) more information about them.
## How to use a processor
-In these following examples, we're using the [`json.decode`](/docs/processors/builtin/json.decode), but you could use any other you'd like from our [Built-in](/docs/processors/builtin/) ones, or even [reference](/docs/processors/referencing) your own [Standalone processor](/docs/processors/standalone).
+In the following examples we're using [`json.decode`](/docs/using/processors/builtin/json.decode), but you could use any other of our [built-in](/docs/using/processors/builtin/) processors, or even [reference](/docs/using/processors/referencing) your own [standalone processor](/docs/developing/processors).
:::info
-When referencing the name of a processor plugin there are different ways you can make sure you're using the one you'd like. Please, check out the [Referencing Processors](/docs/processors/referencing) documentation for more information.
+When referencing a processor plugin by name, there are different ways to make sure you're using the one you'd like. Please check out the [Referencing Processors](/docs/using/processors/referencing) documentation for more information.
:::
-### Using a [pipeline configuration file](/docs/pipeline-configuration-files/getting-started)
+### Using a [pipeline configuration file](/docs/using/pipelines/configuration-file)
#### Using a pipeline processor
@@ -79,7 +78,7 @@ pipelines:
# other connectors
```
-The documentation about how to configure processors in pipeline configuration files can be found [here](/docs/pipeline-configuration-files/specifications#processor).
+The documentation about how to configure processors in pipeline configuration files can be found [here](/docs/using/pipelines/configuration-file#processor).
### Using the [HTTP API](/api#get-/v1/processors)
diff --git a/docs/processors/builtin/avro.decode.mdx b/docs/1-using/5-processors/1-builtin/avro.decode.mdx
similarity index 95%
rename from docs/processors/builtin/avro.decode.mdx
rename to docs/1-using/5-processors/1-builtin/avro.decode.mdx
index 2af83125..f11a863a 100644
--- a/docs/processors/builtin/avro.decode.mdx
+++ b/docs/1-using/5-processors/1-builtin/avro.decode.mdx
@@ -24,7 +24,7 @@ and decodes the payload. The schema is cached locally after it's first downloade
If the processor encounters structured data or the data can't be decoded it returns an error.
-This processor is the counterpart to [`avro.encode`](/docs/processors/builtin/avro.encode).
+This processor is the counterpart to [`avro.encode`](/docs/using/processors/builtin/avro.encode).
## Configuration parameters
@@ -43,7 +43,7 @@ pipelines:
settings:
# The field that will be decoded.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ".Payload.After"
# Whether to decode the record key using its corresponding schema from
@@ -79,7 +79,7 @@ pipelines:
The field that will be decoded.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
diff --git a/docs/processors/builtin/avro.encode.mdx b/docs/1-using/5-processors/1-builtin/avro.encode.mdx
similarity index 98%
rename from docs/processors/builtin/avro.encode.mdx
rename to docs/1-using/5-processors/1-builtin/avro.encode.mdx
index fa8f6eb2..d160746c 100644
--- a/docs/processors/builtin/avro.encode.mdx
+++ b/docs/1-using/5-processors/1-builtin/avro.encode.mdx
@@ -37,7 +37,7 @@ It provides two strategies for determining the schema:
checks need to be disabled for this schema to prevent failures. If the schema subject does not exist before running
this processor, it will automatically set the correct compatibility settings in the schema registry.
-This processor is the counterpart to [`avro.decode`](/docs/processors/builtin/avro.decode).
+This processor is the counterpart to [`avro.decode`](/docs/using/processors/builtin/avro.decode).
## Configuration parameters
@@ -56,7 +56,7 @@ pipelines:
settings:
# The field that will be encoded.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ".Payload.After"
# The subject name under which the inferred schema will be registered
@@ -116,7 +116,7 @@ pipelines:
The field that will be encoded.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
diff --git a/docs/processors/builtin/base64.decode.mdx b/docs/1-using/5-processors/1-builtin/base64.decode.mdx
similarity index 95%
rename from docs/processors/builtin/base64.decode.mdx
rename to docs/1-using/5-processors/1-builtin/base64.decode.mdx
index 8b53284d..29582d0c 100644
--- a/docs/processors/builtin/base64.decode.mdx
+++ b/docs/1-using/5-processors/1-builtin/base64.decode.mdx
@@ -38,7 +38,7 @@ pipelines:
# Field is the reference to the target field. Note that it is not
# allowed to base64 decode the `.Position` field.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -75,7 +75,7 @@ pipelines:
Field is the reference to the target field. Note that it is not allowed to
base64 decode the `.Position` field.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
@@ -121,7 +121,7 @@ For more information about the format, see [Referencing fields](https://conduit.
This example decodes the base64 encoded string stored in
`.Payload.After`. Note that the result is a string, so if you want to
further process the result (e.g. parse the string as JSON), you need to chain
-other processors (e.g. [`json.decode`](/docs/processors/builtin/json.decode)).
+other processors (e.g. [`json.decode`](/docs/using/processors/builtin/json.decode)).
#### Configuration parameters
diff --git a/docs/processors/builtin/base64.encode.mdx b/docs/1-using/5-processors/1-builtin/base64.encode.mdx
similarity index 97%
rename from docs/processors/builtin/base64.encode.mdx
rename to docs/1-using/5-processors/1-builtin/base64.encode.mdx
index b8626189..b69efda2 100644
--- a/docs/processors/builtin/base64.encode.mdx
+++ b/docs/1-using/5-processors/1-builtin/base64.encode.mdx
@@ -40,7 +40,7 @@ pipelines:
# Field is a reference to the target field. Note that it is not
# allowed to base64 encode the `.Position` field.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -77,7 +77,7 @@ pipelines:
Field is a reference to the target field. Note that it is not allowed to
base64 encode the `.Position` field.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
diff --git a/docs/processors/builtin/custom.javascript.mdx b/docs/1-using/5-processors/1-builtin/custom.javascript.mdx
similarity index 100%
rename from docs/processors/builtin/custom.javascript.mdx
rename to docs/1-using/5-processors/1-builtin/custom.javascript.mdx
diff --git a/docs/processors/builtin/error.mdx b/docs/1-using/5-processors/1-builtin/error.mdx
similarity index 99%
rename from docs/processors/builtin/error.mdx
rename to docs/1-using/5-processors/1-builtin/error.mdx
index 8e10c3a6..61dbf9a1 100644
--- a/docs/processors/builtin/error.mdx
+++ b/docs/1-using/5-processors/1-builtin/error.mdx
@@ -20,7 +20,7 @@ Returns an error for all records that get passed to the processor.
Any time a record is passed to this processor it returns an error,
which results in the record being sent to the DLQ if it's configured, or the pipeline stopping.
-**Important:** Make sure to add a [condition](https://conduit.io/docs/processors/conditions)
+**Important:** Make sure to add a [condition](https://conduit.io/docs/using/processors/conditions)
to this processor, otherwise all records will trigger an error.
## Configuration parameters
diff --git a/docs/processors/builtin/field.convert.mdx b/docs/1-using/5-processors/1-builtin/field.convert.mdx
similarity index 98%
rename from docs/processors/builtin/field.convert.mdx
rename to docs/1-using/5-processors/1-builtin/field.convert.mdx
index e823473f..8d951827 100644
--- a/docs/processors/builtin/field.convert.mdx
+++ b/docs/1-using/5-processors/1-builtin/field.convert.mdx
@@ -21,7 +21,7 @@ Convert takes the field of one type and converts it into another type (e.g. stri
The applicable types are string, int, float and bool. Converting can be done between any combination of types. Note that
booleans will be converted to numeric values 1 (true) and 0 (false). Processor is only applicable to `.Key`, `.Payload.Before`
and `.Payload.After` prefixes, and only applicable if said fields contain structured data.
-If the record contains raw JSON data, then use the processor [`json.decode`](/docs/processors/builtin/json.decode)
+If the record contains raw JSON data, then use the processor [`json.decode`](/docs/using/processors/builtin/json.decode)
to parse it into structured data first.
## Configuration parameters
@@ -43,7 +43,7 @@ pipelines:
# can only convert fields in structured data under `.Key` and
# `.Payload`.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -85,7 +85,7 @@ pipelines:
Note that you can only convert fields in structured data under `.Key` and
`.Payload`.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
diff --git a/docs/processors/builtin/field.exclude.mdx b/docs/1-using/5-processors/1-builtin/field.exclude.mdx
similarity index 96%
rename from docs/processors/builtin/field.exclude.mdx
rename to docs/1-using/5-processors/1-builtin/field.exclude.mdx
index e6ad4ca5..55c5b4c1 100644
--- a/docs/processors/builtin/field.exclude.mdx
+++ b/docs/1-using/5-processors/1-builtin/field.exclude.mdx
@@ -22,7 +22,7 @@ If a field is excluded that contains nested data, the whole tree will be removed
It is not allowed to exclude `.Position` or `.Operation` fields.
Note that this processor only runs on structured data, if the record contains
-raw JSON data, then use the processor [`json.decode`](/docs/processors/builtin/json.decode)
+raw JSON data, then use the processor [`json.decode`](/docs/using/processors/builtin/json.decode)
to parse it into structured data first.
## Configuration parameters
@@ -43,7 +43,7 @@ pipelines:
# Fields is a comma separated list of target fields which should be
# excluded.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
fields: ""
# Whether to decode the record key using its corresponding schema from
@@ -79,7 +79,7 @@ pipelines:
Fields is a comma separated list of target fields which should be excluded.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
diff --git a/docs/processors/builtin/field.rename.mdx b/docs/1-using/5-processors/1-builtin/field.rename.mdx
similarity index 95%
rename from docs/processors/builtin/field.rename.mdx
rename to docs/1-using/5-processors/1-builtin/field.rename.mdx
index fedbf4e4..600073ab 100644
--- a/docs/processors/builtin/field.rename.mdx
+++ b/docs/1-using/5-processors/1-builtin/field.rename.mdx
@@ -22,7 +22,7 @@ allowed to rename top-level fields (`.Operation`, `.Position`,
`.Key`, `.Metadata`, `.Payload.Before`, `.Payload.After`).
Note that this processor only runs on structured data, if the record contains raw
-JSON data, then use the processor [`json.decode`](/docs/processors/builtin/json.decode)
+JSON data, then use the processor [`json.decode`](/docs/using/processors/builtin/json.decode)
to parse it into structured data first.
## Configuration parameters
@@ -44,7 +44,7 @@ pipelines:
# their new names (keys and values are separated by colons ":").
# For example: `.Metadata.key:id,.Payload.After.foo:bar`.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
mapping: ""
# Whether to decode the record key using its corresponding schema from
@@ -83,7 +83,7 @@ new names (keys and values are separated by colons ":").
For example: `.Metadata.key:id,.Payload.After.foo:bar`.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
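
Reusing the mapping quoted verbatim in the parameter description above, a `field.rename` entry could look like this (only the processor id is invented):

```yaml
- id: rename-fields
  plugin: field.rename
  settings:
    # Colons separate old and new names, commas separate pairs.
    mapping: ".Metadata.key:id,.Payload.After.foo:bar"
```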
diff --git a/docs/processors/builtin/field.set.mdx b/docs/1-using/5-processors/1-builtin/field.set.mdx
similarity index 97%
rename from docs/processors/builtin/field.set.mdx
rename to docs/1-using/5-processors/1-builtin/field.set.mdx
index 0bec6388..313cedd5 100644
--- a/docs/processors/builtin/field.set.mdx
+++ b/docs/1-using/5-processors/1-builtin/field.set.mdx
@@ -22,7 +22,7 @@ The new value can be a Go template expression, the processor will evaluate the o
If the provided `field` doesn't exist, the processor will create that field and assign its value.
This processor can be used for multiple purposes, like extracting fields, hoisting data, inserting fields, copying fields, masking fields, etc.
Note that this processor only runs on structured data, if the record contains raw JSON data, then use the processor
-[`json.decode`](/docs/processors/builtin/json.decode) to parse it into structured data first.
+[`json.decode`](/docs/using/processors/builtin/json.decode) to parse it into structured data first.
## Configuration parameters
@@ -42,7 +42,7 @@ pipelines:
# Field is the target field that will be set. Note that it is not
# allowed to set the `.Position` field.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -83,7 +83,7 @@ pipelines:
Field is the target field that will be set. Note that it is not allowed
to set the `.Position` field.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
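
A sketch of `field.set` with a Go template expression; the hunk above only shows the `field` parameter, so the `value` key and the field names are assumptions for illustration:

```yaml
- id: add-full-name
  plugin: field.set
  settings:
    field: .Payload.After.full_name   # created if it doesn't exist yet
    # Assumed `value` setting holding a Go template evaluated per record.
    value: '{{ .Payload.After.first }} {{ .Payload.After.last }}'
```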
diff --git a/docs/processors/builtin/filter.mdx b/docs/1-using/5-processors/1-builtin/filter.mdx
similarity index 99%
rename from docs/processors/builtin/filter.mdx
rename to docs/1-using/5-processors/1-builtin/filter.mdx
index dbc27088..b07e7371 100644
--- a/docs/processors/builtin/filter.mdx
+++ b/docs/1-using/5-processors/1-builtin/filter.mdx
@@ -21,7 +21,7 @@ Acknowledges all records that get passed to the filter, so
the records will be filtered out if the condition provided to the processor is
evaluated to `true`.
-**Important:** Make sure to add a [condition](https://conduit.io/docs/processors/conditions)
+**Important:** Make sure to add a [condition](https://conduit.io/docs/using/processors/conditions)
to this processor, otherwise all records will be filtered out.
## Configuration parameters
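
Since the note above warns that a `filter` without a condition drops every record, a hedged example with a condition (the metadata key is hypothetical) might be:

```yaml
- id: drop-staging
  plugin: filter
  # Records for which the condition evaluates to `true` are filtered
  # out; without a condition, every record would be dropped.
  condition: '{{ eq .Metadata.env "staging" }}'
```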
diff --git a/docs/processors/builtin/index.mdx b/docs/1-using/5-processors/1-builtin/index.mdx
similarity index 92%
rename from docs/processors/builtin/index.mdx
rename to docs/1-using/5-processors/1-builtin/index.mdx
index ffe22005..1c23e2b8 100644
--- a/docs/processors/builtin/index.mdx
+++ b/docs/1-using/5-processors/1-builtin/index.mdx
@@ -1,5 +1,4 @@
---
-sidebar_position: 1
title: 'Builtin Processors'
---
diff --git a/docs/processors/builtin/json.decode.mdx b/docs/1-using/5-processors/1-builtin/json.decode.mdx
similarity index 98%
rename from docs/processors/builtin/json.decode.mdx
rename to docs/1-using/5-processors/1-builtin/json.decode.mdx
index 6f08696c..2359ee29 100644
--- a/docs/processors/builtin/json.decode.mdx
+++ b/docs/1-using/5-processors/1-builtin/json.decode.mdx
@@ -42,7 +42,7 @@ pipelines:
# Field is a reference to the target field. Only fields that are under
# `.Key` and `.Payload` can be decoded.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -79,7 +79,7 @@ pipelines:
Field is a reference to the target field. Only fields that are under
`.Key` and `.Payload` can be decoded.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
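
A minimal `json.decode` entry matching the parameter shown in the hunk above (only the processor id is invented):

```yaml
- id: parse-payload
  plugin: json.decode
  settings:
    # Only fields under `.Key` and `.Payload` can be decoded.
    field: .Payload.After
```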
diff --git a/docs/processors/builtin/json.encode.mdx b/docs/1-using/5-processors/1-builtin/json.encode.mdx
similarity index 97%
rename from docs/processors/builtin/json.encode.mdx
rename to docs/1-using/5-processors/1-builtin/json.encode.mdx
index eb991b34..3aa2f0f1 100644
--- a/docs/processors/builtin/json.encode.mdx
+++ b/docs/1-using/5-processors/1-builtin/json.encode.mdx
@@ -41,7 +41,7 @@ pipelines:
# Field is a reference to the target field. Only fields that are under
# `.Key` and `.Payload` can be encoded.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ""
# Whether to decode the record key using its corresponding schema from
@@ -78,7 +78,7 @@ pipelines:
Field is a reference to the target field. Only fields that are under
`.Key` and `.Payload` can be encoded.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
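
The mirror image for `json.encode`, again with an invented processor id:

```yaml
- id: serialize-key
  plugin: json.encode
  settings:
    # Structured data under `.Key` is replaced by its raw JSON string.
    field: .Key
```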
diff --git a/docs/processors/builtin/unwrap.debezium.mdx b/docs/1-using/5-processors/1-builtin/unwrap.debezium.mdx
similarity index 95%
rename from docs/processors/builtin/unwrap.debezium.mdx
rename to docs/1-using/5-processors/1-builtin/unwrap.debezium.mdx
index c775ecd3..7ab96d91 100644
--- a/docs/processors/builtin/unwrap.debezium.mdx
+++ b/docs/1-using/5-processors/1-builtin/unwrap.debezium.mdx
@@ -13,7 +13,7 @@ import TabItem from '@theme/TabItem';
# `unwrap.debezium`
-Unwraps a Debezium record from the input [OpenCDC record](https://conduit.io/docs/features/opencdc-record).
+Unwraps a Debezium record from the input [OpenCDC record](https://conduit.io/docs/using/opencdc-record).
## Description
@@ -23,7 +23,7 @@ completely, except for the position.
The Debezium record's metadata and the wrapping record's metadata is merged, with the Debezium metadata having precedence.
This is useful in cases where Conduit acts as an intermediary between a Debezium source and a Debezium destination.
-In such cases, the Debezium record is set as the [OpenCDC record](https://conduit.io/docs/features/opencdc-record)'s payload,and needs to be unwrapped for further usage.
+In such cases, the Debezium record is set as the [OpenCDC record](https://conduit.io/docs/using/opencdc-record)'s payload, and needs to be unwrapped for further usage.
## Configuration parameters
@@ -42,7 +42,7 @@ pipelines:
settings:
# Field is a reference to the field that contains the Debezium record.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ".Payload.After"
# Whether to decode the record key using its corresponding schema from
@@ -78,7 +78,7 @@ pipelines:
Field is a reference to the field that contains the Debezium record.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
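
A sketch of `unwrap.debezium` using the default `field` value shown in the hunk above (the processor id is invented):

```yaml
- id: unwrap-debezium
  plugin: unwrap.debezium
  settings:
    # The Debezium envelope is expected in this field; `.Payload.After`
    # is the default from the snippet above.
    field: .Payload.After
```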
diff --git a/docs/processors/builtin/unwrap.kafkaconnect.mdx b/docs/1-using/5-processors/1-builtin/unwrap.kafkaconnect.mdx
similarity index 93%
rename from docs/processors/builtin/unwrap.kafkaconnect.mdx
rename to docs/1-using/5-processors/1-builtin/unwrap.kafkaconnect.mdx
index 784b437b..8704cc12 100644
--- a/docs/processors/builtin/unwrap.kafkaconnect.mdx
+++ b/docs/1-using/5-processors/1-builtin/unwrap.kafkaconnect.mdx
@@ -13,16 +13,16 @@ import TabItem from '@theme/TabItem';
# `unwrap.kafkaconnect`
-Unwraps a Kafka Connect record from an [OpenCDC record](https://conduit.io/docs/features/opencdc-record).
+Unwraps a Kafka Connect record from an [OpenCDC record](https://conduit.io/docs/using/opencdc-record).
## Description
-This processor unwraps a Kafka Connect record from the input [OpenCDC record](https://conduit.io/docs/features/opencdc-record).
+This processor unwraps a Kafka Connect record from the input [OpenCDC record](https://conduit.io/docs/using/opencdc-record).
The input record's payload is replaced with the Kafka Connect record.
This is useful in cases where Conduit acts as an intermediary between a Debezium source and a Debezium destination.
-In such cases, the Debezium record is set as the [OpenCDC record](https://conduit.io/docs/features/opencdc-record)'s payload, and needs to be unwrapped for further usage.
+In such cases, the Debezium record is set as the [OpenCDC record](https://conduit.io/docs/using/opencdc-record)'s payload, and needs to be unwrapped for further usage.
## Configuration parameters
@@ -42,7 +42,7 @@ pipelines:
# Field is a reference to the field that contains the Kafka Connect
# record.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ".Payload.After"
# Whether to decode the record key using its corresponding schema from
@@ -78,7 +78,7 @@ pipelines:
Field is a reference to the field that contains the Kafka Connect record.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
@@ -124,7 +124,7 @@ For more information about the format, see [Referencing fields](https://conduit.
This example shows how to unwrap a Kafka Connect record.
The Kafka Connect record is serialized as a JSON string in the `.Payload.After` field (raw data).
-The Kafka Connect record's payload will replace the [OpenCDC record](https://conduit.io/docs/features/opencdc-record)'s payload.
+The Kafka Connect record's payload will replace the [OpenCDC record](https://conduit.io/docs/using/opencdc-record)'s payload.
We also see how the key is unwrapped too. In this case, the key comes in as structured data.
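
For `unwrap.kafkaconnect`, the equivalent sketch (processor id invented, `field` default taken from the hunk above):

```yaml
- id: unwrap-kc
  plugin: unwrap.kafkaconnect
  settings:
    field: .Payload.After   # default per the snippet above
```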
diff --git a/docs/processors/builtin/unwrap.opencdc.mdx b/docs/1-using/5-processors/1-builtin/unwrap.opencdc.mdx
similarity index 88%
rename from docs/processors/builtin/unwrap.opencdc.mdx
rename to docs/1-using/5-processors/1-builtin/unwrap.opencdc.mdx
index b9ef6a08..760d7e26 100644
--- a/docs/processors/builtin/unwrap.opencdc.mdx
+++ b/docs/1-using/5-processors/1-builtin/unwrap.opencdc.mdx
@@ -13,15 +13,15 @@ import TabItem from '@theme/TabItem';
# `unwrap.opencdc`
-Unwraps an [OpenCDC record](https://conduit.io/docs/features/opencdc-record) saved in one of the record's fields.
+Unwraps an [OpenCDC record](https://conduit.io/docs/using/opencdc-record) saved in one of the record's fields.
## Description
The `unwrap.opencdc` processor is useful in situations where a record goes through intermediate
-systems before being written to a final destination. In these cases, the original [OpenCDC record](https://conduit.io/docs/features/opencdc-record) is part of the payload
+systems before being written to a final destination. In these cases, the original [OpenCDC record](https://conduit.io/docs/using/opencdc-record) is part of the payload
read from the intermediate system and needs to be unwrapped before being written.
-Note: if the wrapped [OpenCDC record](https://conduit.io/docs/features/opencdc-record) is not in a structured data field, then it's assumed that it's stored in JSON format.
+Note: if the wrapped [OpenCDC record](https://conduit.io/docs/using/opencdc-record) is not in a structured data field, then it's assumed that it's stored in JSON format.
## Configuration parameters
@@ -40,7 +40,7 @@ pipelines:
settings:
# Field is a reference to the field that contains the OpenCDC record.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
field: ".Payload.After"
# Whether to decode the record key using its corresponding schema from
@@ -76,7 +76,7 @@ pipelines:
Field is a reference to the field that contains the OpenCDC record.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
@@ -117,9 +117,9 @@ For more information about the format, see [Referencing fields](https://conduit.
## Examples
-### Unwrap an [OpenCDC record](https://conduit.io/docs/features/opencdc-record)
+### Unwrap an [OpenCDC record](https://conduit.io/docs/using/opencdc-record)
-In this example we use the `unwrap.opencdc` processor to unwrap the [OpenCDC record](https://conduit.io/docs/features/opencdc-record) found in the record's `.Payload.After` field.
+In this example we use the `unwrap.opencdc` processor to unwrap the [OpenCDC record](https://conduit.io/docs/using/opencdc-record) found in the record's `.Payload.After` field.
#### Configuration parameters
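
And for `unwrap.opencdc`, matching the example described above (processor id invented):

```yaml
- id: unwrap-opencdc
  plugin: unwrap.opencdc
  settings:
    # If the wrapped record is not structured data, it's assumed
    # to be stored as JSON (see the note above).
    field: .Payload.After
```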
diff --git a/docs/processors/builtin/webhook.http.mdx b/docs/1-using/5-processors/1-builtin/webhook.http.mdx
similarity index 98%
rename from docs/processors/builtin/webhook.http.mdx
rename to docs/1-using/5-processors/1-builtin/webhook.http.mdx
index d9f3bd8e..8f6e9253 100644
--- a/docs/processors/builtin/webhook.http.mdx
+++ b/docs/1-using/5-processors/1-builtin/webhook.http.mdx
@@ -81,13 +81,13 @@ pipelines:
request.url: ""
# Specifies in which field should the response body be saved.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
response.body: ".Payload.After"
# Specifies in which field should the response status be saved. If no
# value is set, then the response status will NOT be saved.
# For more information about the format, see [Referencing
- # fields](https://conduit.io/docs/processors/referencing-fields).
+ # fields](https://conduit.io/docs/using/processors/referencing-fields).
# Type: string
response.status: ""
# Whether to decode the record key using its corresponding schema from
@@ -203,7 +203,7 @@ to make it easier to write templates.
Specifies in which field should the response body be saved.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
@@ -214,7 +214,7 @@ For more information about the format, see [Referencing fields](https://conduit.
Specifies in which field should the response status be saved. If no value
is set, then the response status will NOT be saved.
-For more information about the format, see [Referencing fields](https://conduit.io/docs/processors/referencing-fields).
+For more information about the format, see [Referencing fields](https://conduit.io/docs/using/processors/referencing-fields).
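
Pulling together the `webhook.http` parameters shown in this hunk, a hedged example (the URL and the metadata target field are hypothetical) might look like:

```yaml
- id: enrich-via-http
  plugin: webhook.http
  settings:
    request.url: "https://api.example.com/lookup"  # hypothetical endpoint
    response.body: .Payload.After      # where the response body is saved
    response.status: .Metadata.status  # optional; hypothetical target field
```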
diff --git a/docs/processors/conditions.mdx b/docs/1-using/5-processors/2-conditions.mdx
similarity index 75%
rename from docs/processors/conditions.mdx
rename to docs/1-using/5-processors/2-conditions.mdx
index a0764cde..414e8a3b 100644
--- a/docs/processors/conditions.mdx
+++ b/docs/1-using/5-processors/2-conditions.mdx
@@ -1,12 +1,11 @@
---
title: 'Conditional Execution'
-sidebar_position: 3
---
-When a [processor](/docs/connectors/getting-started) is attached to a connector or to a pipeline, we may still
+When a [processor](/docs/using/processors/getting-started) is attached to a connector or to a pipeline, we may still
want to specify conditions for its execution. To do this, we can add a `condition`
-key to the processor definition in the [Pipeline Configuration File](/docs/processors/getting-started#using-a-pipeline-configuration-file), or using
-the [HTTP API](/docs/processors/getting-started#using-the-http-api) including this parameter in the [POST request](/api#post-/v1/processors).
+key to the processor definition in the [Pipeline Configuration File](/docs/using/processors/getting-started#using-a-pipeline-configuration-file), or, when using
+the [HTTP API](/docs/using/processors/getting-started#using-the-http-api), include this parameter in the [POST request](/api#post-/v1/processors).
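
To make the `condition` key concrete, here is a hedged snippet of a processor definition in a pipeline configuration file; the metadata key and the excluded field are invented for illustration:

```yaml
processors:
  - id: mask-in-prod
    plugin: field.exclude
    # Hypothetical condition: run this processor only for records whose
    # metadata marks them as production traffic.
    condition: '{{ eq .Metadata.env "production" }}'
    settings:
      fields: .Payload.After.ssn
```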