
arbitrary sidecar support #308

Closed
wants to merge 6 commits

Conversation

theRealWardo

see the discussion in #264

README.md Outdated
By default, no monitoring is enabled. Monitoring can be done in two ways:

*Bake it into the image.* You can extend PostgreSQL to have monitoring built in with extensions such as
[bg_mon](https://github.com/CyberDem0n/bg_mon). You could build a custom Docker image and use it for all
Contributor

I'd say you need to build a custom Docker image based on Spilo, since the operator is tightly coupled to that, relying on Patroni to be there and passing environment variables with names accepted by Spilo (i.e. SPILO_CONFIGURATION).

README.md Outdated
*Deploy a sidecar*, another container that runs alongside PostgreSQL and connects to it
specifically to run monitoring operations. This container could be a Prometheus agent, Telegraf, or
any other custom monitoring container you would like to run. In the same way that `dockerImage` can
be specified above, `monitoringDockerImage` can also be specified. An additional container will be run
Contributor

I'd change the parameter names to mention just sidecar, not monitoring, to be more generic, although I'd keep the scope of this chapter to monitoring.
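As a rough sketch of what that more generic naming might look like in a cluster manifest (the `sidecars` key and its fields here are illustrative of the suggestion, not the API that was merged):

```yaml
# Hypothetical cluster manifest excerpt; key and field names are
# illustrative only, following the reviewer's suggestion to say
# "sidecar" rather than "monitoring".
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  dockerImage: registry.opensource.zalan.do/acid/spilo
  sidecars:
    - name: "metrics-exporter"
      dockerImage: "example/exporter:latest"
```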

// TODO: figure out auth.
/*
{
Name: "POSTGRES_USER",
Contributor

You can expose them from the secrets, i.e.

ValueFrom: &v1.EnvVarSource{
	SecretKeyRef: &v1.SecretKeySelector{
		LocalObjectReference: v1.LocalObjectReference{
			Name: c.credentialSecretName(c.OpConfig.SuperUsername),
		},
		Key: "password",
	},
},

Author

done!

podSpec.Containers = append(podSpec.Containers,
	v1.Container{
		Name:  "monitoring-sidecar",
		Image: c.monitoringImage(),
Contributor

Looks like it is not defined anywhere.

Author

yea this was a sketch and I was trying to figure out how to get my environment set up... all fixed up and working now.

@theRealWardo theRealWardo changed the title a monitoring side car arbitrary sidecar support Jun 1, 2018
@coveralls

coveralls commented Jun 1, 2018

Coverage Status

Coverage decreased (-0.1%) to 11.024% when pulling 6b2a02b on theRealWardo:monitoring into 681656c on zalando-incubator:master.

@theRealWardo
Author

okay so I tried to do something like `Sidecars []v1.Container` rather than a custom `Sidecar` type, but there were some problems specifically with the handling of resources there.

that of course means that there's a bit more of a painful path to supporting things like adding custom volumes/mounts here, but I think this is probably a decent start. just let me know what you think!
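A minimal sketch of the trade-off being discussed: a small custom sidecar type converted into a container spec, rather than exposing `[]v1.Container` directly. The type and field names are illustrative, not the PR's actual code, and plain structs stand in for the Kubernetes `core/v1` types so the sketch stays self-contained.

```go
package main

import "fmt"

// Container is a simplified stand-in for k8s.io/api/core/v1.Container;
// the real operator would use the Kubernetes type directly.
type Container struct {
	Name  string
	Image string
}

// Sidecar is a hypothetical operator-level type. Keeping it separate
// from Container lets the operator control exactly which fields (e.g.
// resource requests) a user may set, which is the difficulty a bare
// []v1.Container in the manifest ran into.
type Sidecar struct {
	Name        string
	DockerImage string
}

// toContainer maps the restricted Sidecar spec onto a container spec.
func toContainer(s Sidecar) Container {
	return Container{Name: s.Name, Image: s.DockerImage}
}

func main() {
	c := toContainer(Sidecar{Name: "metrics", DockerImage: "example/exporter:1.0"})
	fmt.Printf("%s -> %s\n", c.Name, c.Image)
}
```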

@alexeyklyukin
Contributor

@theRealWardo thanks for updating the PR. I see little use in modifying individual cluster manifests. What if you run 100 clusters managed by the operator? Would you ask each one of them to change its definition to incorporate the same container image (be it monitoring or log support)? Instead, we can pass this information via the operator configuration to configure all clusters managed by the operator at once and uniformly. It could be a parameter of the operator itself, i.e.:

sidecar_containers: "name:image, name:image, name:image". 

We could pass the necessary minimum of environment variables, like credentials, to those containers via `c.generatePodTemplate` and the like, and use `pod_environment_configmap` to propagate the rest.
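A rough sketch of how such a flat `sidecar_containers` string might be parsed into name/image pairs (the function name and error handling are illustrative, not the operator's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSidecars splits a flat "name:image, name:image" configuration
// string into a name -> image map. SplitN with a limit of 2 keeps any
// further colons in the image reference (e.g. a ":tag" suffix) intact.
func parseSidecars(spec string) (map[string]string, error) {
	sidecars := make(map[string]string)
	for _, entry := range strings.Split(spec, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		parts := strings.SplitN(entry, ":", 2)
		if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
			return nil, fmt.Errorf("malformed sidecar entry %q", entry)
		}
		sidecars[parts[0]] = parts[1]
	}
	return sidecars, nil
}

func main() {
	m, err := parseSidecars("metrics:example/exporter:v1, logs:example/shipper")
	fmt.Println(m, err)
}
```

This also illustrates the readability concern raised below: a string like this carries no ports, resources, or env vars without further ad-hoc encoding.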

@theRealWardo
Author

@alexeyklyukin while I do agree there is a use case for cluster-wide sidecars, I would propose supporting both. I think there is a use case for configuring an individual cluster's sidecars differently from other clusters' sidecars.

I also believe this project has poor support for structured data in the operator config. how am I supposed to specify ports, for example? the structured string in the operator config is extremely hard to read and therefore error-prone. it further requires maintaining a separate object definition from the standard Kubernetes objects, making the learning curve higher and thus reducing the potential traction of this operator. for these reasons, I avoided the operator config for now and have proposed a change to the cluster definition.

@theRealWardo
Author

@alexeyklyukin friendly ping. would love to get this or something like it in. we've been running this code in our staging environment fine for the last week or so. 😄

@alexeyklyukin
Contributor

> @alexeyklyukin while I do agree there is a use case for cluster wide sidecars, I would propose supporting both. I think that there is a use case for configuring an individual cluster's sidecars differently from other clusters' sidecars.

I see none. What is your use case for that?

> I also believe this project has poor support for structured data in the operator config. how am I supposed to specify ports for example? the structured string in the operator config is extremely hard to read and therefore error prone. it further requires maintaining separate object definition from the standard Kubernetes objects, making the learning curve higher, and thus reducing potential traction of this operator. for these reasons, I avoided the operator config for now and have proposed a change to the cluster definition.

Fair enough, I have some plans to address that.

@alexeyklyukin
Contributor

> @alexeyklyukin friendly ping. would love to get this or something like it in. we've been running this code in our staging environment fine for the last week or so. 😄

I am against merging it w/o support for the sidecars in the operator configuration.

alexeyklyukin added a commit that referenced this pull request Jun 14, 2018
Up until now, the operator read its own configuration from the
configmap. That has a number of limitations, e.g. when the
configuration value is not a scalar, but a map or a list. We use
custom code based on github.com/kelseyhightower/envconfig to decode
non-scalar values out of plain-text keys, but that breaks when the data
inside the keys contains both YAML-special elements (e.g. commas) and
complex quotes; one good example of that is search_path inside
`team_api_role_configuration`. In addition, reliance on the configmap
forced a flat structure on the configuration, making it hard to write
and to read (see
#308 (comment)).

This commit allows supplying the operator configuration in a proper YAML
file. That required registering a custom CRD to support the operator
configuration and providing an example at
manifests/postgresql-operator-default-configuration.yaml. At the moment,
both the old configmap and the new CRD configuration are supported, so
there are no compatibility issues; however, in the future I'd like to
deprecate the configmap-based configuration altogether. Contrary to the
configmap-based configuration, the CRD one doesn't embed defaults into
the operator code; however, one can use
manifests/postgresql-operator-default-configuration.yaml as a starting
point in order to build a custom configuration.

Since previously `ReadyWaitInterval` and `ReadyWaitTimeout` parameters
used to create the CRD were taken from the operator configuration, which
is not possible if the configuration itself is stored in the CRD object,
I've added the ability to specify them as environment variables
`CRD_READY_WAIT_INTERVAL` and `CRD_READY_WAIT_TIMEOUT` respectively.
@alexeyklyukin
Contributor

Looks like it is quite trivial to add global sidecars there; I did some quick-and-dirty work at https://github.com/zalando-incubator/postgres-operator/tree/global_and_per_cluster_sidecards (I also resolved the documentation conflict with master on that branch).

@alexeyklyukin
Contributor

@theRealWardo I have created #331 based on your PR that adds support for global containers and does some refactoring. I have considered doing it against your own branch, but since the latest master is not merged in there the PR shows too many unrelated changes.

I'd appreciate it if you could take a look.

@alexeyklyukin
Contributor

@theRealWardo thank you for your work on this, we merged your changes together with support for global sidecars in 25a3062.

As a next step, I see merging #326 and, subsequently, providing richer operator-wide configuration for sidecars.

alexeyklyukin added a commit that referenced this pull request Jul 16, 2018
Per review by @zerg-junior and @Jan-M.