# confluent_flink_statement Resource - terraform-provider-confluent
-> Note: It is recommended to set `lifecycle { prevent_destroy = true }` on production instances to prevent accidental statement deletion. This setting rejects plans that would destroy or recreate the statement, such as attempting to change uneditable attributes. Read more about it in the Terraform docs.
provider "confluent" {
cloud_api_key = var.confluent_cloud_api_key # optionally use CONFLUENT_CLOUD_API_KEY env var
cloud_api_secret = var.confluent_cloud_api_secret # optionally use CONFLUENT_CLOUD_API_SECRET env var
}
resource "confluent_flink_statement" "random_int_table" {
organization {
id = data.confluent_organization.main.id
}
environment {
id = data.confluent_environment.staging.id
}
compute_pool {
id = confluent_flink_compute_pool.example.id
}
principal {
id = confluent_service_account.app-manager-flink.id
}
statement = "CREATE TABLE random_int_table(ts TIMESTAMP_LTZ(3), random_value INT);"
properties = {
"sql.current-catalog" = data.confluent_environment.example.display_name
"sql.current-database" = data.confluent_kafka_cluster.example.display_name
}
# Use data.confluent_flink_region.main.rest_endpoint for Basic, Standard, public Dedicated Kafka clusters
# and data.confluent_flink_region.main.private_rest_endpoint for Kafka clusters with private networking
rest_endpoint = data.confluent_flink_region.main.rest_endpoint
credentials {
key = confluent_api_key.env-admin-flink-api-key.id
secret = confluent_api_key.env-admin-flink-api-key.secret
}
lifecycle {
prevent_destroy = true
}
}
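The Option #1 example above assumes that the referenced service account, compute pool, and data sources are defined elsewhere in the workspace. A minimal sketch of the data-source lookups might look like the following; the environment ID, cluster name, cloud, and region values are placeholders you would replace with your own:

```terraform
# Hypothetical lookups assumed by the Option #1 example above; replace the
# placeholder values with your own.
data "confluent_organization" "main" {}

data "confluent_environment" "staging" {
  id = "env-abc123" # placeholder Environment ID
}

data "confluent_kafka_cluster" "example" {
  display_name = "inventory" # placeholder Kafka cluster name
  environment {
    id = data.confluent_environment.staging.id
  }
}

data "confluent_flink_region" "main" {
  cloud  = "AWS"       # placeholder cloud provider
  region = "us-east-1" # placeholder region
}
```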
provider "confluent" {
organization_id = var.organization_id # optionally use CONFLUENT_ORGANIZATION_ID env var
environment_id = var.environment_id # optionally use CONFLUENT_ENVIRONMENT_ID env var
flink_compute_pool_id = var.flink_compute_pool_id # optionally use FLINK_COMPUTE_POOL_ID env var
flink_rest_endpoint = var.flink_rest_endpoint # optionally use FLINK_REST_ENDPOINT env var
flink_api_key = var.flink_api_key # optionally use FLINK_API_KEY env var
flink_api_secret = var.flink_api_secret # optionally use FLINK_API_SECRET env var
flink_principal_id = var.flink_principal_id # optionally use FLINK_PRINCIPAL_ID env var
}
resource "confluent_flink_statement" "example" {
statement = "CREATE TABLE random_int_table(ts TIMESTAMP_LTZ(3), random_value INT);"
properties = {
"sql.current-catalog" = var.confluent_environment_display_name
"sql.current-database" = var.confluent_kafka_cluster_display_name
}
lifecycle {
prevent_destroy = true
}
}
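Option #2 assumes the connectivity settings are supplied as input variables. A minimal sketch of those declarations, using the variable names from the provider block above; only a few are shown, and the secret is marked `sensitive` so Terraform redacts it from CLI output:

```terraform
# Hypothetical variable declarations backing the Option #2 provider block;
# the remaining variables (organization_id, environment_id, and so on)
# follow the same pattern.
variable "flink_rest_endpoint" {
  type = string
}

variable "flink_api_key" {
  type = string
}

variable "flink_api_secret" {
  type      = string
  sensitive = true # redacts the value from plan/apply output
}
```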
The following arguments are supported:
- `organization` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Organization, for example, `1111aaaa-11aa-11aa-11aa-111111aaaaaa`.
- `environment` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Environment, for example, `env-abc123`.
- `compute_pool` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Flink Compute Pool, for example, `lfcp-abc123`.
- `principal` - (Optional Configuration Block) supports the following:
  - `id` - (Required String) The ID of the Principal the Flink Statement runs as, for example, `sa-abc123`.
- `statement` - (Required String) The raw SQL text statement, for example, `SELECT CURRENT_TIMESTAMP;`.
- `statement_name` - (Optional String) The ID of the Flink Statement, for example, `cfeab4fe-b62c-49bd-9e99-51cc98c77a67`.
- `rest_endpoint` - (Optional String) The REST endpoint of the Flink region, for example, `https://flink.us-east-1.aws.confluent.cloud`.
- `credentials` - (Optional Configuration Block) supports the following:
  - `key` - (Required String) The Flink API Key.
  - `secret` - (Required String, Sensitive) The Flink API Secret.
-> Note: A Flink API key consists of a key and a secret. Flink API keys are required to interact with Flink Statements in Confluent Cloud. Each Flink API key is valid for one specific Flink Region.
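For reference, a Flink API key such as the `env-admin-flink-api-key` used in Option #1 can be created with the `confluent_api_key` resource scoped to a Flink region. A minimal sketch, assuming the service account, region data source, and environment from the Option #1 example above:

```terraform
resource "confluent_api_key" "env-admin-flink-api-key" {
  display_name = "env-admin-flink-api-key"
  description  = "Flink API Key owned by the 'app-manager-flink' service account"

  # The service account that owns the key.
  owner {
    id          = confluent_service_account.app-manager-flink.id
    api_version = confluent_service_account.app-manager-flink.api_version
    kind        = confluent_service_account.app-manager-flink.kind
  }

  # Scope the key to one specific Flink region.
  managed_resource {
    id          = data.confluent_flink_region.main.id
    api_version = data.confluent_flink_region.main.api_version
    kind        = data.confluent_flink_region.main.kind

    environment {
      id = data.confluent_environment.staging.id
    }
  }
}
```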
-> Note: Use Option #2 to simplify the key rotation process. When using Option #1, to rotate a Flink API key, create a new Flink API key, update the `credentials` block in all configuration files to use the new Flink API key, run `terraform apply -target="confluent_flink_statement.example"`, and remove the old Flink API key. Alternatively, in case the old Flink API Key was deleted already, you might need to run `terraform plan -refresh=false -target="confluent_flink_statement.example" -out=rotate-flink-api-key` and `terraform apply rotate-flink-api-key` instead.
- `properties` - (Optional Map) The custom statement properties to set:
  - `name` - (Required String) The setting name, for example, `sql.local-time-zone`.
  - `value` - (Required String) The setting value, for example, `GMT-08:00`.
- `stopped` - (Optional Boolean) The boolean flag to control whether the running Flink Statement should be stopped. Defaults to `false`. Update it to `true` to stop the statement. Subsequently, update it to `false` to resume the statement.
!> Note: To stop a running statement or resume a stopped statement, no other argument can be updated except `stopped`.
!> Note: Currently, only three kinds of Flink statements support resuming, namely `CREATE TABLE AS`, `INSERT INTO`, and `EXECUTE STATEMENT SET`.
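Putting `properties` and `stopped` together, a hypothetical `INSERT INTO` statement (one of the resumable kinds) that sets a custom property and is then stopped by toggling `stopped` might look like this; the table names and variables are illustrative:

```terraform
resource "confluent_flink_statement" "example" {
  # Hypothetical long-running statement; replace with your own SQL.
  statement = "INSERT INTO sink_table SELECT * FROM source_table;"

  properties = {
    "sql.current-catalog"  = var.confluent_environment_display_name
    "sql.current-database" = var.confluent_kafka_cluster_display_name
    "sql.local-time-zone"  = "GMT-08:00" # custom setting from the `properties` map above
  }

  # Flip to true to stop the running statement; flip back to false to resume it.
  # No other argument may change in the same apply.
  stopped = true
}
```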
!> Warning: Use Option #2 to avoid exposing the sensitive `credentials` value in a state file. When using Option #1, Terraform doesn't encrypt the sensitive `credentials` value of the `confluent_flink_statement` resource, so you must keep your state file secure to avoid exposing it. Refer to the Terraform documentation to learn more about securing your state file.
In addition to the preceding arguments, the following attributes are exported:
- `id` - (Required String) The ID of the Flink statement, in the format `<Environment ID>/<Flink Compute Pool ID>/<Flink Statement name>`, for example, `env-abc123/lfcp-xyz123/cfeab4fe-b62c-49bd-9e99-51cc98c77a67`.
You can import a Flink statement by using the Flink Statement name, for example:
# Option #1: Manage multiple Flink Compute Pools in the same Terraform workspace
$ export IMPORT_CONFLUENT_ORGANIZATION_ID="<organization_id>"
$ export IMPORT_CONFLUENT_ENVIRONMENT_ID="<environment_id>"
$ export IMPORT_FLINK_COMPUTE_POOL_ID="<flink_compute_pool_id>"
$ export IMPORT_FLINK_API_KEY="<flink_api_key>"
$ export IMPORT_FLINK_API_SECRET="<flink_api_secret>"
$ export IMPORT_FLINK_REST_ENDPOINT="<flink_rest_endpoint>"
$ export IMPORT_FLINK_PRINCIPAL_ID="<flink_principal_id>"
$ terraform import confluent_flink_statement.example cfeab4fe-b62c-49bd-9e99-51cc98c77a67
# Option #2: Manage a single Flink Compute Pool in the same Terraform workspace
$ terraform import confluent_flink_statement.example cfeab4fe-b62c-49bd-9e99-51cc98c77a67
!> Warning: Do not forget to delete your terminal command history afterwards for security purposes.
The following end-to-end example might help you get started with Flink Statements: