Multiple ReplicaSet Race Condition #2188
This is camel-k:1.3.1.
The behavior goes away when we remove the configuration block for the Integration object which contains:

configuration:
  - type: env
    value: AWS_REGION=us-west-2

It creates one ReplicaSet for each configuration entry. Is there a better way to pass environment variables?
I suspect there is another controller that applies conflicting changes to the Deployment. That would explain the behaviour you are seeing. To identify the other controller that can possibly change the Deployment, could you please provide its definition, by providing the output of […]? Also, if this is possible for you, it could be interesting that you test on […].
Here is the output YAML. As mentioned by @mjallday, if we get rid of the env variables (first snippet below), then the system starts with just one ReplicaSet, i.e. with 0 or 1 values of:

configuration:
  - type: env
    value: AWS_REGION=us-west-2

Here is the output of the Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "30"
creationTimestamp: "2021-04-01T20:35:58Z"
generation: 33
labels:
camel.apache.org/generation: "1"
camel.apache.org/integration: processor-mj
name: processor-mj
namespace: camel-k
ownerReferences:
- apiVersion: camel.apache.org/v1
blockOwnerDeletion: true
controller: true
kind: Integration
name: processor-mj
uid: 68f161fb-6b35-47fc-ac3f-de77ed6138b1
resourceVersion: "114176131"
selfLink: /apis/apps/v1/namespaces/camel-k/deployments/processor-mj
uid: 1f8102a7-a3ef-4af9-85bf-2d3a9a139988
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
camel.apache.org/integration: processor-mj
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
camel.apache.org/integration: processor-mj
spec:
containers:
- args:
- ...
command:
- /bin/sh
- -c
env:
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: SAMPLE_ENV_NAME
value: "SAMPLE_ENV_VALUE"
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
image: quay.io/...
imagePullPolicy: IfNotPresent
name: integration
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/camel/sources/i-source-000
name: i-source-000
readOnly: true
- mountPath: /etc/camel/conf
name: application-properties
readOnly: true
workingDir: /deployments
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: camel-k
serviceAccountName: camel-k
terminationGracePeriodSeconds: 30
volumes:
- configMap:
defaultMode: 420
items:
- key: content
path: process.groovy
name: processor-mj-source-000
name: i-source-000
- configMap:
defaultMode: 420
items:
- key: application.properties
path: application.properties
name: processor-mj-application-properties
name: application-properties
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2021-04-01T20:36:07Z"
lastUpdateTime: "2021-04-01T20:36:07Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2021-04-01T20:35:59Z"
lastUpdateTime: "2021-04-02T09:27:01Z"
message: ReplicaSet "processor-mj-d457745f9" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 33
readyReplicas: 1
replicas: 1
updatedReplicas: 1
@fshaikh-vgs thanks for the details. If I understand your use case correctly, this is the expected behaviour of the Kubernetes Deployment controller. When the Integration is updated with an extra environment variable, the Camel K operator adds it to the Deployment. Kubernetes then triggers a rollout, which creates a new ReplicaSet. This is described in detail in https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment.

To avoid that behaviour, it is possible to pause the Deployment, apply a sequence of updates without creating a new ReplicaSet for each one, and then resume the Deployment. This is documented in https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pausing-and-resuming-a-deployment. That being said, I'm not sure it'll work for the Deployment created and managed by the Camel K operator, and owned by the Integration.
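For reference, a minimal sketch of that pause/resume sequence with plain kubectl, using the Deployment name and namespace from this thread; as noted above, the operator may still reconcile the Deployment and override manual changes:

kubectl -n camel-k rollout pause deployment/processor-mj
# ...apply whatever spec updates are needed while rollouts are paused...
kubectl -n camel-k rollout resume deployment/processor-mj

Resuming triggers at most one new rollout for all of the accumulated changes.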
@astefanutti thanks for the quick response! But the Integration is not updated with an extra environment variable; all the variables are provided at once in one file. Why can't Camel K apply them in a single update?
Ah, I misunderstood it. Thanks for the clarification, now I think I got it right. So I think it's fixed with #2039, which introduces server-side apply to manage the Integration Deployment. The underlying issue lies in the way we process environment variables as a […]. Even if that is fixed with #2039, as a safety measure, I think it's still important that we avoid processing these environment variables as […].
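As a rough way to check whether the spurious rollouts still happen, for example after upgrading to a release that includes #2039, the Deployment's revision history can be inspected; the name and namespace below are assumed from the output earlier in this thread, and each unnecessary update should show up as an extra revision:

kubectl -n camel-k rollout history deployment/processor-mj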
We're trying to spin up a camel-k integration and there's some sort of race condition creating replicasets as it's starting. Here's how we're doing this.
which we're applying with this command:
kubectl -n camel-k apply -f config.yaml
When this happens I see the Integration created successfully,
and the Deployment looks like this,
and then there's a ton of ReplicaSets created.
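A possible way to see all of those ReplicaSets at once is to list them by the integration label that appears in the Deployment selector above; the namespace and label value are assumed from the rest of this thread:

kubectl -n camel-k get rs -l camel.apache.org/integration=processor-mj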
Is this expected behavior? The Integration file doesn't appear to let us control scaling. I'm looking for it to create a single ReplicaSet with a single pod, or, if I scale it, to maintain a single ReplicaSet with multiple pods, but it seems to spam ReplicaSets for a while before settling down.
If I look at the ReplicaSet I see a bunch of scaling events, but I'm unsure why:
kubectl -n camel-k describe rs/processor-mj-7f9758bb7f
Any hints on how we can stop this behavior?