arbitrary sidecar support #308
Conversation
README.md
Outdated
By default, no monitoring is enabled. Monitoring can be done in two ways:

*Bake it into the image.* You can extend PostgreSQL to have monitoring built in with extensions such as
[bg_mon](https://github.com/CyberDem0n/bg_mon). You could build a custom Docker image and use it for all
I'd say you need to build a custom Docker image based on Spilo, since the operator is tightly coupled to that, relying on Patroni to be there and passing environment variables with names accepted by Spilo (i.e. `SPILO_CONFIGURATION`).
README.md
Outdated
*Deploy a sidecar*, another container that runs alongside PostgreSQL and connects to PostgreSQL
specifically to run monitoring operators. This container could be a Prometheus agent, Telegraf, or
any other custom monitoring container you would like to run. In the same way that `dockerImage` can
be specified above, `monitoringDockerImage` can also be specified. An additional container will be run
I'd change the parameter names to mention just sidecar, not monitoring, to be more generic, although keep the scope of this chapter to be monitoring.
pkg/cluster/k8sres.go
Outdated
// TODO: figure out auth.
/*
{
	Name: "POSTGRES_USER",
You can expose them from the secrets, i.e.
ValueFrom: &v1.EnvVarSource{
	SecretKeyRef: &v1.SecretKeySelector{
		LocalObjectReference: v1.LocalObjectReference{
			Name: c.credentialSecretName(c.OpConfig.SuperUsername),
		},
		Key: "password",
	},
},
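The snippet above uses the `k8s.io/api/core/v1` types, which need a Kubernetes client dependency. As a self-contained illustration of how that fragment fits into a complete environment variable, here is a minimal sketch with simplified stand-in structs; the struct definitions and the secret name `postgres-credentials` are stand-ins for illustration, not the operator's real types or names:

```go
package main

import "fmt"

// Simplified stand-ins for the k8s.io/api/core/v1 types used in the
// reviewer's snippet; the real operator imports those types directly.
type LocalObjectReference struct{ Name string }

type SecretKeySelector struct {
	LocalObjectReference // embedded, so Name is promoted
	Key                  string
}

type EnvVarSource struct{ SecretKeyRef *SecretKeySelector }

type EnvVar struct {
	Name      string
	Value     string
	ValueFrom *EnvVarSource
}

func main() {
	// Hypothetical secret name; the operator would derive it from
	// c.credentialSecretName(c.OpConfig.SuperUsername).
	pw := EnvVar{
		Name: "POSTGRES_PASSWORD",
		ValueFrom: &EnvVarSource{
			SecretKeyRef: &SecretKeySelector{
				LocalObjectReference: LocalObjectReference{Name: "postgres-credentials"},
				Key:                  "password",
			},
		},
	}
	fmt.Println(pw.Name, pw.ValueFrom.SecretKeyRef.Key) // POSTGRES_PASSWORD password
}
```

Wiring the password through a `ValueFrom` reference rather than a literal `Value` keeps credentials out of the pod spec itself.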
done!
pkg/cluster/k8sres.go
Outdated
podSpec.Containers,
v1.Container{
	Name:  "monitoring-sidecar",
	Image: c.monitoringImage(),
Looks like it is not defined anywhere.
yea this was a sketch and I was trying to figure out how to get my environment set up... all fixed up and working now.
okay so I tried to do something like that. of course this means there's a bit more of a painful path to supporting things like adding custom volumes/mounts here, but I think this is probably a decent start. just let me know what you think!
@theRealWardo thanks for updating the PR. I see little use in modifying individual cluster manifests. What if you run 100 clusters managed by the operator, would you ask each one of them to change its definition to incorporate the same container image (be it monitoring or log support)? Instead, we can pass this information via the operator configuration to configure all clusters managed by the operator at once and uniformly. It could be a parameter of the operator itself, e.g. `sidecar_containers: "name:image, name:image, name:image"`. We could pass the necessary minimum of environment variables like credentials to those containers via the …
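The `sidecar_containers: "name:image, name:image"` setting proposed above could be decoded with a small parser. The sketch below is a hypothetical illustration of that format (the function name and error handling are assumptions, not operator code); it splits each entry on the first colon only, so image references containing registry ports or tags stay intact:

```go
package main

import (
	"fmt"
	"strings"
)

// parseSidecars decodes an operator-level setting of the form
// "name:image, name:image" into a name->image map. Hypothetical
// sketch of the proposed format, not code from the operator.
func parseSidecars(spec string) (map[string]string, error) {
	sidecars := make(map[string]string)
	for _, entry := range strings.Split(spec, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		// Split on the first colon only, so images with registry
		// ports or tags (e.g. "registry:5000/img:v1") survive.
		parts := strings.SplitN(entry, ":", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("malformed sidecar entry %q", entry)
		}
		sidecars[parts[0]] = parts[1]
	}
	return sidecars, nil
}

func main() {
	m, err := parseSidecars("metrics:prom/agent:v1, logs:telegraf:1.7")
	if err != nil {
		panic(err)
	}
	fmt.Println(m["metrics"], m["logs"]) // prom/agent:v1 telegraf:1.7
}
```

Even this toy parser shows the fragility of flat strings: a container name containing a comma or colon cannot be expressed at all, which is part of the argument for structured configuration below.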
@alexeyklyukin while I do agree there is a use case for cluster-wide sidecars, I would propose supporting both. I think there is a use case for configuring an individual cluster's sidecars differently from other clusters' sidecars. I also believe this project has poor support for structured data in the operator config. how am I supposed to specify ports, for example? the structured string in the operator config is extremely hard to read and therefore error prone. it further requires maintaining a separate object definition from the standard Kubernetes objects, making the learning curve higher and thus reducing the potential traction of this operator. for these reasons, I avoided the operator config for now and have proposed a change to the cluster definition.
@alexeyklyukin friendly ping. would love to get this or something like it in. we've been running this code in our staging environment fine for the last week or so. 😄
I see none. What is your use case for that?
Fair enough, I have some plans to address that.
I am against merging it without support for the sidecars in the operator configuration.
Up until now, the operator read its own configuration from the configmap. That has a number of limitations, e.g. when the configuration value is not a scalar, but a map or a list. We use custom code based on github.com/kelseyhightower/envconfig to decode non-scalar values out of plain-text keys, but that breaks when the data inside the keys contains both YAML-special elements (e.g. commas) and complex quotes; one good example is search_path inside `team_api_role_configuration`. In addition, reliance on the configmap forced a flat structure on the configuration, making it hard to write and to read (see #308 (comment)).

This commit allows supplying the operator configuration in a proper YAML file. That required registering a custom CRD to support the operator configuration; an example is provided at manifests/postgresql-operator-default-configuration.yaml. At the moment, both the old configmap and the new CRD configuration are supported, so there are no compatibility issues; however, in the future I'd like to deprecate the configmap-based configuration altogether. Contrary to the configmap-based configuration, the CRD one doesn't embed defaults into the operator code; however, one can use manifests/postgresql-operator-default-configuration.yaml as a starting point for building a custom configuration.

Since the `ReadyWaitInterval` and `ReadyWaitTimeout` parameters used to create the CRD were previously taken from the operator configuration, which is not possible if the configuration itself is stored in the CRD object, I've added the ability to specify them as environment variables `CRD_READY_WAIT_INTERVAL` and `CRD_READY_WAIT_TIMEOUT` respectively.
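The configmap limitation described above can be shown with a toy example: decoding a list out of a flat string by splitting on commas silently corrupts any value that itself contains a comma, such as a search_path of `"$user", public`. This is a minimal sketch of the failure mode, not the operator's actual decoding code:

```go
package main

import (
	"fmt"
	"strings"
)

// naiveSplit mimics decoding a list out of a flat configmap string by
// splitting on commas, roughly what envconfig-style decoding does.
func naiveSplit(value string) []string {
	parts := strings.Split(value, ",")
	for i := range parts {
		parts[i] = strings.TrimSpace(parts[i])
	}
	return parts
}

func main() {
	// Two logical entries, but the second value contains a comma:
	raw := `login:true, search_path:"$user", public`
	parts := naiveSplit(raw)
	// The search_path value is torn apart: 3 entries instead of 2.
	fmt.Println(len(parts)) // 3
}
```

A proper YAML document has no such ambiguity, since list and map structure is explicit rather than encoded in a delimiter convention.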
Looks like it is quite trivial to add global sidecars there; I did some quick-and-dirty work at https://github.com/zalando-incubator/postgres-operator/tree/global_and_per_cluster_sidecards (I also resolved the documentation conflict with the master in that PR).
@theRealWardo I have created #331 based on your PR that adds support for global containers and does some refactoring. I have considered doing it against your own branch, but since the latest master is not merged in there, the PR shows too many unrelated changes. I'd appreciate it if you take a look.
@theRealWardo thank you for your work on this, we merged your changes together with support for global sidecars in 25a3062. As a next step to that I see merging of #326 and, subsequently, providing richer operator-wide configuration for sidecars. |
see the discussion in #264