
README: production user should change mac range #191

Merged
1 commit merged into k8snetworkplumbingwg:master on Jul 1, 2020

Conversation

@dankenigsberg
Contributor

dankenigsberg commented Jun 15, 2020

What this PR does / why we need it:

It is important to set a distinct, seemingly random range for each cluster. If nearby clusters are deployed with the same range and use Layer 2 CNIs, inter-cluster MAC collisions are likely.

This PR states this explicitly and provides bash code to do that. While at it, it also makes the relevant section more readable.

Release note:

NONE

Signed-off-by: Dan Kenigsberg <[email protected]>
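
(For illustration, the kind of bash snippet this PR adds to the README could look like the sketch below; the variable names and exact layout are assumptions, not the merged code.)

```bash
# Hypothetical sketch of picking a distinct, seemingly-random MAC range per
# cluster. The leading 02 octet sets the locally-administered bit, so the
# range cannot collide with vendor-assigned hardware MACs.
RAND0=$(printf '%02x' $((RANDOM % 256)))
RAND1=$(printf '%02x' $((RANDOM % 256)))
echo "RANGE_START=02:${RAND0}:${RAND1}:00:00:00"
echo "RANGE_END=02:${RAND0}:${RAND1}:ff:ff:ff"
```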
@kubevirt-bot
Collaborator

Hi @dankenigsberg. Thanks for your PR.

I'm waiting for a k8snetworkplumbingwg member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@phoracek
Member

/ok-to-test

```diff
-For example [Multus](https://github.com/intel/multus-cni).
-To deploy multus on a kubernetes cluster with flannel cni.
+* Install any supported [Network Plumbing Working Group de-facto standard](https://github.com/k8snetworkplumbingwg/multi-net-spec) implementation, for example [Multus](https://github.com/intel/multus-cni).
+To deploy multus on a kubernetes cluster with flannel cni:
```
Collaborator

We should consider using another CNI in this example (e.g., Calico), to prevent readers from encountering compatibility issues between Flannel and Kubernetes (1.16 and above).
There is an open issue about it in the Flannel repo [1].

[1] flannel-io/flannel#1245
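
(Editorial aside, not part of the PR: a minimal sketch of swapping Calico into this example, assuming its standard published install manifest at the time.)

```bash
# Hypothetical alternative to flannel for this example: deploy Calico from
# its published manifest.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```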

Contributor Author

I hope this can wait for another PR. This PR is only about setting a random-like MAC range.

Collaborator
Sure, no problem :)

@ormergi
Collaborator

ormergi commented Jun 22, 2020

/lgtm

```diff
-For example [Multus](https://github.com/intel/multus-cni).
-To deploy multus on a kubernetes cluster with flannel cni.
+* Install any supported [Network Plumbing Working Group de-facto standard](https://github.com/k8snetworkplumbingwg/multi-net-spec) implementation, for example [Multus](https://github.com/intel/multus-cni).
+To deploy multus on a kubernetes cluster with flannel cni:
```
```bash
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/kubernetes-multus.yaml
kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml
```
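
(Editorial aside, not from the PR: after applying these manifests, one might confirm the deployment with commands like the following; the daemonset name is an assumption, while the CRD name appears in the bot output below.)

```bash
# Hypothetical check that the multus daemonset and the
# network-attachment-definitions CRD are in place.
kubectl get daemonset --all-namespaces | grep -i multus
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
```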
Member

I think we need to refresh this, but that's for another PR.

```bash
hades01:kubemacpool $ kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/multus configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/multus configured
serviceaccount/multus created
error: unable to recognize "https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
```
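
(Editorial aside, not from the PR: this error is the usual symptom of a manifest written for the extensions/v1beta1 API, which Kubernetes 1.16 removed; a DaemonSet must now use apps/v1. A rough sketch of the kind of local fix, with the caveat noted in the comments:)

```bash
# Hypothetical sketch: fetch a local copy of the manifest and update the
# DaemonSet apiVersion. Note that apps/v1 also requires spec.selector to
# match the pod template labels, which this sed alone cannot add.
curl -LO https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml
sed -i 's|extensions/v1beta1|apps/v1|' multus.yaml
```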

Member

Added an issue: #205

@RamLavi
Member

RamLavi commented Jun 22, 2020

/lgtm
/approve

@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: RamLavi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@phoracek
Member

phoracek commented Jul 1, 2020

/release-note-none

@kubevirt-bot kubevirt-bot merged commit a76d269 into k8snetworkplumbingwg:master Jul 1, 2020