Switch calico to be deployed with the Tigera operator #1297
Conversation
Force-pushed from 91e5de7 to 4a32b96
/cc @caseydavenport - can you please review this?
config/master/calico.yaml (Outdated)

    format: int32
    type: integer
  keepOriginalNextHop:
    default: false
@tmjd could you remove this default: false here? It doesn't add any value since it matches the code default, and can cause issues applying the CRD as per: projectcalico/calico#4237
Thanks @caseydavenport :)
* Switch calico to be deployed with the operator
* Update operator from v3.17.0 to v3.17.1
* Review update
Hey @tmjd, we noticed an issue while trying to apply the latest calico config file. It seems we cannot install Calico with the newest version of the yaml file. However, we are able to install it on a cluster after installing the old version of the file, removing it, and then applying the latest version. This is the error on a new cluster (no prior Calico installation):
In order to install the latest version, we have to follow this path:
Do you know what could be causing this?
There seems to be an intermittent issue where it is necessary to apply the new calico.yaml twice. I don't believe removing it first should be necessary.
@couralex6 I am working on a fix for that at the moment. You should be able to work around it by simply applying the calico.yaml twice for now. Here's the first PR in my fix: #1410
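For anyone hitting the same problem, the workaround described above amounts to running the apply step twice. A minimal sketch, assuming the manifest path used in this repo:

```sh
# Sketch of the workaround discussed above: the first apply may fail on a new
# cluster, and re-running the same apply has been reported to succeed.
kubectl apply -f config/master/calico.yaml
kubectl apply -f config/master/calico.yaml
```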
This switches the Calico install to be done using the Tigera operator. This PR includes a manifest to install the operator which will install Calico v3.17.
What type of PR is this?
update config?
Which issue does this PR fix:
What does this PR do / Why do we need it:
If an issue # is not available please add repro steps and logs from IPAMD/CNI showing the issue:
Testing done on this change:
I have tested upgrading a cluster that had a previous version of Calico installed (alongside the Amazon VPC CNI), and also tested installing on a cluster that only had the Amazon VPC CNI plugin installed (with no existing Calico).
Automation added to e2e:
Will this break upgrades or downgrades. Has updating a running cluster been tested?:
Because of the way the upgrade happens with the operator, there is a problem upgrading on small clusters (3 nodes or fewer). For clusters of that size the operator tries to deploy one Typha per node, the existing Calico install already runs at least one Typha, and multiple Typha instances cannot run on a single node.
Once a cluster is upgraded with these changes, it will not be simple to downgrade back to a version of Calico that was installed without the operator.
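Regarding the small-cluster Typha constraint above, a rough pre-upgrade check might look like the following. This is only a sketch: the kube-system namespace and the calico-typha deployment name are assumptions based on the pre-operator manifest, not something verified here.

```sh
# Compare the node count against the existing Typha replica count before
# upgrading a small (<= 3 node) cluster. Namespace and deployment name are
# assumptions based on the pre-operator calico.yaml.
kubectl get nodes --no-headers | wc -l
kubectl get deployment calico-typha -n kube-system -o jsonpath='{.spec.replicas}{"\n"}'
```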
Does this change require updates to the CNI daemonset config files to work?:
This updates the daemonset config files for Calico. A 'kubectl patch' of the image tag does not work, because this change switches Calico to be installed by an operator.
Does this PR introduce any user-facing change?:
Users of Calico can now use kubectl get tigerastatus to get an overview of the status of the Calico installation. They also have the installations.operator.tigera.io default resource available for making configuration changes; a brief sketch of both follows below.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
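A minimal sketch of the user-facing commands mentioned above. The tigerastatus resource and the installations.operator.tigera.io default resource are named in the description; viewing and editing them this way is an illustrative assumption rather than a prescribed workflow.

```sh
# Overview of Calico component status as reported by the Tigera operator:
kubectl get tigerastatus

# Inspect the default Installation resource, or edit it to make
# configuration changes:
kubectl get installations.operator.tigera.io default -o yaml
kubectl edit installations.operator.tigera.io default
```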