Remove kafka experimental flag for next release #159

Merged · 1 commit · Aug 31, 2022
README.md (1 addition, 1 deletion)

@@ -117,7 +117,7 @@ A couple of settings deserve special attention:

- Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned in the _Getting Started_ section, but you may have to configure differently if you used another installation method.

- Kafka (`spec.kafka`): _experimental_ - when enabled, integrate the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Assumes Kafka is already deployed and a topic is created. For convenience, we provide a quick deployment using [strimzi](https://strimzi.io/): run `make deploy-kafka` from the repository.
- Kafka (`spec.kafka`): when enabled, integrate the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created. For convenience, we provide a quick deployment using [strimzi](https://strimzi.io/): run `make deploy-kafka` from the repository.

## Development & building from sources

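To make the change concrete, here is a minimal sketch of a FlowCollector resource with Kafka enabled, matching the README bullet above. Only `spec.kafka` and its `address` property are visible in the diffs on this page; the `enable` and `topic` fields, the resource name, and the bootstrap address are illustrative assumptions:

```yaml
apiVersion: flows.netobserv.io/v1alpha1
kind: FlowCollector
metadata:
  name: cluster              # assumed name; any valid name works
spec:
  kafka:
    enable: true             # assumed field: route flows through Kafka,
                             # splitting ingestion from transformation
    address: "kafka-cluster-kafka-bootstrap.netobserv"
                             # assumed bootstrap address, e.g. the service
                             # created by `make deploy-kafka` (strimzi)
    topic: "network-flows"   # assumed field: the topic must already exist
```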
api/v1alpha1/flowcollector_types.go (1 addition, 1 deletion)

@@ -69,7 +69,7 @@ type FlowCollectorSpec struct {
Loki FlowCollectorLoki `json:"loki,omitempty"`

// Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline.
// This is a new and experimental feature, not yet recommended to use in production.
// Kafka can provide better scalability, resiliency and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).
// +optional
Kafka FlowCollectorKafka `json:"kafka,omitempty"`

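The `FlowCollectorKafka` type referenced above is outside this diff. As a rough sketch of how such a kubebuilder type is commonly declared — only the `address` property (with an empty default) is confirmed by the CRD excerpt below; the other fields and defaults are assumptions:

```go
// FlowCollectorKafka defines the Kafka broker used by the flow collection
// pipeline. Sketch only: `address` appears in the CRD excerpt below, while
// `enable` and `topic` are assumed for illustration.
type FlowCollectorKafka struct {
	// Enable, when true, routes flows through Kafka instead of ingesting
	// them directly. (assumed field)
	//+kubebuilder:default:=false
	Enable bool `json:"enable,omitempty"`

	// Address of the Kafka bootstrap server.
	//+kubebuilder:default:=""
	Address string `json:"address"`

	// Topic used to publish and consume flows; it must already exist.
	// (assumed field)
	//+kubebuilder:default:=""
	Topic string `json:"topic"`
}
```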
config/crd/bases/flows.netobserv.io_flowcollectors.yaml (3 additions, 2 deletions)

@@ -1491,8 +1491,9 @@ spec:
type: object
kafka:
description: Kafka configuration, allowing to use Kafka as a broker
as part of the flow collection pipeline. This is a new and experimental
feature, not yet recommended to use in production.
as part of the flow collection pipeline. Kafka can provide better
scalability, resiliency and high availability (for more details,
see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).
properties:
address:
default: ""
(file name not shown)

@@ -57,7 +57,7 @@ spec:

- Loki (`spec.loki`): configure here how to reach Loki. The default values match the Loki quick install paths mentioned above, but you may have to configure differently if you used another installation method.

- Kafka (`spec.kafka`): _experimental_ - when enabled, integrate the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Assumes Kafka is already deployed and a topic is created.
- Kafka (`spec.kafka`): when enabled, integrate the flow collection pipeline with Kafka, by splitting ingestion from transformation (kube enrichment, derived metrics, ...). Kafka can provide better scalability, resiliency and high availability ([view more details](https://www.redhat.com/en/topics/integration/what-is-apache-kafka)). Assumes Kafka is already deployed and a topic is created.

## Overview

docs/FlowCollector.md (2 additions, 2 deletions)

@@ -136,7 +136,7 @@ FlowCollectorSpec defines the desired state of FlowCollector
<td><b><a href="#flowcollectorspeckafka">kafka</a></b></td>
<td>object</td>
<td>
Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. This is a new and experimental feature, not yet recommended to use in production.<br/>
Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Kafka can provide better scalability, resiliency and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).<br/>
</td>
<td>false</td>
</tr><tr>
@@ -2584,7 +2584,7 @@ Settings related to IPFIX-based flow reporter when the "agent" property is set to "ipfix".



Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. This is a new and experimental feature, not yet recommended to use in production.
Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Kafka can provide better scalability, resiliency and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka).
