
K8s 1.16 compatibility: statefulset moved to apps/v1 #674

Closed
valer-cara opened this issue Sep 24, 2019 · 7 comments · Fixed by #675

@valer-cara
Contributor

valer-cara commented Sep 24, 2019

The current generateStatefulSet() uses the apps/v1beta1 API, which is no longer served as of Kubernetes 1.16.

The controller is thus unable to instantiate the cluster:

level=error msg="could not sync cluster: could not sync statefulsets: could not create missing statefulset: the server could not find the requested resource"
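For reference, the fix amounts to bumping the API group of the generated StatefulSet manifest. A minimal sketch of the difference (the cluster name and labels below are illustrative, taken from the minimal example cluster, not from the operator's actual template):

```yaml
# Sketch only: apps/v1beta1 is no longer served in Kubernetes 1.16,
# so generated StatefulSets must use apps/v1 instead.
apiVersion: apps/v1        # previously: apps/v1beta1
kind: StatefulSet
metadata:
  name: acid-minimal-cluster
spec:
  selector:                # required in apps/v1 (was optional in v1beta1)
    matchLabels:
      application: spilo   # assumed label; must match the pod template labels
  serviceName: acid-minimal-cluster
  replicas: 2
  template:
    metadata:
      labels:
        application: spilo
    spec:
      containers: []       # operator fills in the Spilo container spec
```

Note that apps/v1 also makes spec.selector mandatory and immutable, which is one reason the replacement is not a pure rename.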
@FxKu FxKu added the bug label Sep 24, 2019
@FxKu
Member

FxKu commented Sep 24, 2019

Thanks for noticing. I have to check whether replacing v1beta1 is easily possible. At least apps/v1 has been available for a while now.

@valer-cara
Contributor Author

My only concern was backwards compatibility. AFAIK the controller only updates the StatefulSet without affecting the existing pods, so it should be safe.

@Bryji

Bryji commented Sep 30, 2019

Is it confirmed that this is a seamless upgrade, i.e. the operator will update the StatefulSet while leaving the pods intact?

@Keithsc

Keithsc commented Oct 10, 2019

I am having trouble getting the Postgres operator to run on k8s v1.16.0 or 1.16.1, although it runs fine for me on 1.15.3. Has anyone had any success running it on v1.16.x?

@philippfreyer

philippfreyer commented Oct 14, 2019

@Keithsc I asked on the Slack channel and got the answer that the following Docker image should already contain the fix and work (it did for me):
image: registry.opensource.zalan.do/acid/postgres-operator:v1.2.0-15-gbb855fd-dirty

Patching the deployment YAML file with this image:
sed -e "s#image: registry.opensource.zalan.do/acid/postgres-operator:v1.2.0#image: registry.opensource.zalan.do/acid/postgres-operator:v1.2.0-15-gbb855fd-dirty#g" manifests/postgres-operator.yaml | kubectl -n postgres-operator create -f -

@colixxx

colixxx commented Nov 20, 2019

I am having trouble getting the Postgres operator to run on k8s v1.15.0 (my Kubernetes cluster) or 1.16.2 (my Minikube).

kubectl create -f manifests/postgres-operator.yaml

2019/11/20 15:11:51 Fully qualified configmap name: default/postgres-operator
time="2019-11-20T15:11:51Z" level=warning msg="in the operator config map, the pod service account name zalando-postgres-operator does not match the name operator given in the account definition; using the former for consistency" pkg=controller
time="2019-11-20T15:11:51Z" level=info msg="Parse role bindings" pkg=controller
time="2019-11-20T15:11:51Z" level=info msg="successfully parsed" pkg=controller
time="2019-11-20T15:11:51Z" level=info msg="Listening to all namespaces" pkg=controller
time="2019-11-20T15:11:51Z" level=info msg="customResourceDefinition \"postgresqls.acid.zalan.do\" has been registered" pkg=controller
time="2019-11-20T15:11:59Z" level=warning msg="in the operator config map, the pod service account name zalando-postgres-operator does not match the name operator given in the account definition; using the former for consistency" pkg=controller
time="2019-11-20T15:11:59Z" level=info msg="config: {\n\t\"ReadyWaitInterval\": 3000000000,\n\t\"ReadyWaitTimeout\": 30000000000,\n\t\"ResyncPeriod\": 300000000000,\n\t\"RepairPeriod\": 300000000000,\n\t\"ResourceCheckInterval\": 3000000000,\n\t\"ResourceCheckTimeout\": 600000000000,\n\t\"PodLabelWaitTimeout\": 600000000000,\n\t\"PodDeletionWaitTimeout\": 600000000000,\n\t\"SpiloFSGroup\": null,\n\t\"PodPriorityClassName\": \"\",\n\t\"ClusterDomain\": \"cluster.local\",\n\t\"SpiloPrivileged\": false,\n\t\"ClusterLabels\": {\n\t\t\"application\": \"spilo\"\n\t},\n\t\"InheritedLabels\": null,\n\t\"ClusterNameLabel\": \"version\",\n\t\"PodRoleLabel\": \"spilo-role\",\n\t\"PodToleration\": null,\n\t\"DefaultCPURequest\": \"100m\",\n\t\"DefaultMemoryRequest\": \"100Mi\",\n\t\"DefaultCPULimit\": \"3\",\n\t\"DefaultMemoryLimit\": \"1Gi\",\n\t\"PodEnvironmentConfigMap\": \"\",\n\t\"NodeReadinessLabel\": null,\n\t\"MaxInstances\": -1,\n\t\"MinInstances\": -1,\n\t\"ShmVolume\": true,\n\t\"SecretNameTemplate\": \"{username}.{cluster}.credentials\",\n\t\"PamRoleName\": \"zalandos\",\n\t\"PamConfiguration\": \"https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees\",\n\t\"TeamsAPIUrl\": \"https://teams.example.com/api/\",\n\t\"OAuthTokenSecretName\": \"default/postgresql-operator\",\n\t\"InfrastructureRolesSecretName\": \"/\",\n\t\"SuperUsername\": \"postgres\",\n\t\"ReplicationUsername\": \"standby\",\n\t\"ScalyrAPIKey\": \"\",\n\t\"ScalyrImage\": \"\",\n\t\"ScalyrServerURL\": \"https://upload.eu.scalyr.com\",\n\t\"ScalyrCPURequest\": \"100m\",\n\t\"ScalyrMemoryRequest\": \"50Mi\",\n\t\"ScalyrCPULimit\": \"1\",\n\t\"ScalyrMemoryLimit\": \"1Gi\",\n\t\"LogicalBackupSchedule\": \"30 00 * * *\",\n\t\"LogicalBackupDockerImage\": \"registry.opensource.zalan.do/acid/logical-backup\",\n\t\"LogicalBackupS3Bucket\": \"\",\n\t\"WatchedNamespace\": \"\",\n\t\"EtcdHost\": \"\",\n\t\"DockerImage\": \"registry.opensource.zalan.do/acid/spilo-11:1.6-p1\",\n\t\"Sidecars\": 
null,\n\t\"PodServiceAccountName\": \"zalando-postgres-operator\",\n\t\"PodServiceAccountDefinition\": \"\\n\\t\\t{ \\\"apiVersion\\\": \\\"v1\\\",\\n\\t\\t  \\\"kind\\\": \\\"ServiceAccount\\\",\\n\\t\\t  \\\"metadata\\\": {\\n\\t\\t\\t\\t \\\"name\\\": \\\"operator\\\"\\n\\t\\t   }\\n\\t\\t}\",\n\t\"PodServiceAccountRoleBindingDefinition\": \"\\n\\t\\t{\\n\\t\\t\\t\\\"apiVersion\\\": \\\"rbac.authorization.k8s.io/v1beta1\\\",\\n\\t\\t\\t\\\"kind\\\": \\\"RoleBinding\\\",\\n\\t\\t\\t\\\"metadata\\\": {\\n\\t\\t\\t\\t   \\\"name\\\": \\\"zalando-postgres-operator\\\"\\n\\t\\t\\t},\\n\\t\\t\\t\\\"roleRef\\\": {\\n\\t\\t\\t\\t\\\"apiGroup\\\": \\\"rbac.authorization.k8s.io\\\",\\n\\t\\t\\t\\t\\\"kind\\\": \\\"ClusterRole\\\",\\n\\t\\t\\t\\t\\\"name\\\": \\\"zalando-postgres-operator\\\"\\n\\t\\t\\t},\\n\\t\\t\\t\\\"subjects\\\": [\\n\\t\\t\\t\\t{\\n\\t\\t\\t\\t\\t\\\"kind\\\": \\\"ServiceAccount\\\",\\n\\t\\t\\t\\t\\t\\\"name\\\": \\\"zalando-postgres-operator\\\"\\n\\t\\t\\t\\t}\\n\\t\\t\\t]\\n\\t\\t}\",\n\t\"MasterPodMoveTimeout\": 1200000000000,\n\t\"DbHostedZone\": \"db.example.com\",\n\t\"AWSRegion\": \"eu-central-1\",\n\t\"WALES3Bucket\": \"\",\n\t\"LogS3Bucket\": \"\",\n\t\"KubeIAMRole\": \"\",\n\t\"AdditionalSecretMount\": \"\",\n\t\"AdditionalSecretMountPath\": \"/meta/credentials\",\n\t\"DebugLogging\": true,\n\t\"EnableDBAccess\": true,\n\t\"EnableTeamsAPI\": false,\n\t\"EnableTeamSuperuser\": false,\n\t\"TeamAdminRole\": \"admin\",\n\t\"EnableAdminRoleForUsers\": true,\n\t\"EnableMasterLoadBalancer\": false,\n\t\"EnableReplicaLoadBalancer\": false,\n\t\"CustomServiceAnnotations\": null,\n\t\"EnablePodAntiAffinity\": false,\n\t\"PodAntiAffinityTopologyKey\": \"kubernetes.io/hostname\",\n\t\"EnableLoadBalancer\": null,\n\t\"MasterDNSNameFormat\": \"{cluster}.{team}.staging.{hostedzone}\",\n\t\"ReplicaDNSNameFormat\": \"{cluster}-repl.{team}.staging.{hostedzone}\",\n\t\"PDBNameFormat\": \"postgres-{cluster}-pdb\",\n\t\"EnablePodDisruptionBudget\": 
true,\n\t\"Workers\": 4,\n\t\"APIPort\": 8080,\n\t\"RingLogLines\": 100,\n\t\"ClusterHistoryEntries\": 1000,\n\t\"TeamAPIRoleConfiguration\": {\n\t\t\"log_statement\": \"all\"\n\t},\n\t\"PodTerminateGracePeriod\": 300000000000,\n\t\"PodManagementPolicy\": \"ordered_ready\",\n\t\"ProtectedRoles\": [\n\t\t\"admin\"\n\t],\n\t\"PostgresSuperuserTeams\": null,\n\t\"SetMemoryRequestToLimit\": false\n}" pkg=controller
time="2019-11-20T15:11:59Z" level=debug msg="acquiring initial list of clusters" pkg=controller
time="2019-11-20T15:11:59Z" level=info msg="no clusters running" pkg=controller
time="2019-11-20T15:11:59Z" level=info msg="started working in background" pkg=controller
time="2019-11-20T15:11:59Z" level=info msg="listening on :8080" pkg=apiserver
time="2019-11-20T15:11:59Z" level=debug msg="new node has been added: \"/\" ()" pkg=controller
time="2019-11-20T15:11:59Z" level=debug msg="new node has been added: \"/\" ()" pkg=controller
time="2019-11-20T15:11:59Z" level=debug msg="new node has been added: \"/\" ()" pkg=controller

After deploying PostgreSQL with kubectl create -f manifests/postgres-operator.yaml:

time="2019-11-20T15:13:27Z" level=info msg="\"ADD\" event has been queued" cluster-name=default/acid-minimal-cluster pkg=controller worker=0
time="2019-11-20T15:13:27Z" level=info msg="creation of the cluster started" cluster-name=default/acid-minimal-cluster pkg=controller worker=0
time="2019-11-20T15:13:27Z" level=info msg="endpoint \"default/acid-minimal-cluster\" has been successfully created" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=info msg="master service \"default/acid-minimal-cluster\" has been successfully created" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="No load balancer created for the replica service" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=info msg="replica service \"default/acid-minimal-cluster-repl\" has been successfully created" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="team API is disabled, returning empty list of members for team \"ACID\"" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=info msg="users have been initialized" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="created new secret \"default/foo-user.acid-minimal-cluster.credentials\", uid: \"d0f8d42c-ffd8-4bab-9b39-a7c82b6c398a\"" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="created new secret \"default/zalando.acid-minimal-cluster.credentials\", uid: \"183857ee-1fbb-4d79-a562-bd32f51ddb46\"" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="created new secret \"default/postgres.acid-minimal-cluster.credentials\", uid: \"f6f5cd0d-b0c7-4ae8-807e-d836f70c0108\"" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="created new secret \"default/standby.acid-minimal-cluster.credentials\", uid: \"02a15cbb-a96a-4573-8357-2ef8f6965bc3\"" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=info msg="secrets have been successfully created" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=info msg="pod disruption budget \"default/postgres-acid-minimal-cluster-pdb\" has been successfully created" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=debug msg="Generating Spilo container, environment variables: [{SCOPE acid-minimal-cluster nil} {PGROOT /home/postgres/pgdata/pgroot nil} {POD_IP  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {PGUSER_SUPERUSER postgres nil} {KUBERNETES_SCOPE_LABEL version nil} {KUBERNETES_ROLE_LABEL spilo-role nil} {KUBERNETES_LABELS application=spilo nil} {PGPASSWORD_SUPERUSER  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:postgres.acid-minimal-cluster.credentials,},Key:password,Optional:nil,},}} {PGUSER_STANDBY standby nil} {PGPASSWORD_STANDBY  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:standby.acid-minimal-cluster.credentials,},Key:password,Optional:nil,},}} {PAM_OAUTH2 https://info.example.com/oauth2/tokeninfo?access_token= uid realm=/employees nil} {HUMAN_ROLE zalandos nil} {SPILO_CONFIGURATION {\"postgresql\":{\"bin_dir\":\"/usr/lib/postgresql/10/bin\"},\"bootstrap\":{\"initdb\":[{\"auth-host\":\"md5\"},{\"auth-local\":\"trust\"}],\"users\":{\"zalandos\":{\"password\":\"\",\"options\":[\"CREATEDB\",\"NOLOGIN\"]}},\"dcs\":{}}} nil} {DCS_ENABLE_KUBERNETES_API true nil}]" cluster-name=default/acid-minimal-cluster pkg=cluster worker=0
time="2019-11-20T15:13:27Z" level=error msg="could not create cluster: could not create statefulset: the server could not find the requested resource" cluster-name=default/acid-minimal-cluster pkg=controller worker=0

@Flowkap

Flowkap commented Feb 12, 2020

I did not find any migration guide. Upgrading the operator works fine, but the existing StatefulSets are NOT changed (recreated). What is the desired migration path?


OK, after a lot of confusion where we saw that even newly created StatefulSets had apps/v1beta2, we looked again. Only the Kubernetes dashboard printed v1beta2, even though it was actually apps/v1; we did not check with kubectl :( And since we're migrating on Kubernetes 1.15.3, of course it was still the old K8s dashboard.
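For anyone scripting the migration of stored manifests by hand, a minimal sketch of the apiVersion rewrite (a hypothetical helper, not part of the operator; it only touches StatefulSet documents on deprecated apps groups):

```python
# Hypothetical helper: bump a parsed StatefulSet manifest (e.g. loaded
# from YAML/JSON) from a deprecated apps group to apps/v1 before
# re-applying it with kubectl. Other kinds/versions pass through unchanged.
def migrate_statefulset_api(manifest: dict) -> dict:
    deprecated = {"apps/v1beta1", "apps/v1beta2"}
    if manifest.get("kind") == "StatefulSet" and manifest.get("apiVersion") in deprecated:
        return {**manifest, "apiVersion": "apps/v1"}
    return manifest

sts = {"apiVersion": "apps/v1beta1", "kind": "StatefulSet",
       "metadata": {"name": "acid-minimal-cluster"}}
print(migrate_statefulset_api(sts)["apiVersion"])  # apps/v1
```

Note that rewriting the manifest is only half the story: apps/v1 also requires spec.selector, so a manifest that omitted it under v1beta1 needs that field added as well.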
