Convert cluster.openshift.io/Network into the NetworkConfig CRD; update Network status #47

Merged
3 commits merged · Jan 15, 2019
6 changes: 4 additions & 2 deletions Gopkg.lock


4 changes: 0 additions & 4 deletions Gopkg.toml
@@ -53,10 +53,6 @@ required = [
name = "k8s.io/code-generator"
non-go = false

[[constraint]]
name="github.com/openshift/api"
revision="2699ad42427b7e7b2cad1daefc93b632c9c0bb6c"

[[constraint]]
name = "github.com/Masterminds/sprig"
version = "^2"
Expand Down
70 changes: 59 additions & 11 deletions README.md
@@ -4,35 +4,79 @@ The Cluster Network Operator installs and upgrades the networking components on

It follows the [Controller pattern](https://godoc.org/github.com/kubernetes-sigs/controller-runtime/pkg#hdr-Controller): it reconciles the state of the cluster against a desired configuration. The configuration is specified by a CustomResourceDefinition called `networkoperator.openshift.io/NetworkConfig/v1`, which has a corresponding [type](/openshift/cluster-network-operator/blob/master/pkg/apis/networkoperator/v1/networkconfig_types.go).

Most users will be able to use the top-level OpenShift Config API, which has a [Network type](https://github.com/openshift/api/blob/master/config/v1/types_network.go#L26). The operator will automatically translate the `Network.config.openshift.io` object into a `NetworkConfig.networkoperator.openshift.io`.

When the controller has reconciled and all its dependent resources have converged, the cluster should have an installed SDN plugin and a working service network. In OpenShift, the Cluster Network Operator runs very early in the install process -- while the bootstrap API server is still running.

# Configuring
The network operator has a complex configuration, but most parameters have a sensible default.
The network operator gets its configuration from two objects: the Cluster and the Operator configuration. Most users only need to create the Cluster configuration - the operator will generate its configuration automatically. If you need finer-grained configuration of your network, you will need to create both configurations.

Any changes to the Cluster configuration are propagated down into the Operator configuration. In the event of conflicts, the Operator configuration will be updated to match the Cluster configuration.

For example, if you want to use the default VXLAN port for OpenShiftSDN, then you don't need to do anything. However, if you need to customize that port, you will need to create both objects and set the port in the Operator config.
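For illustration, a minimal sketch of the Operator config fragment for that VXLAN case. The `openshiftSDNConfig` and `vxlanPort` field names are assumptions (they are not shown in this diff), and the port value is arbitrary; the Cluster configuration itself would stay unchanged.

```yaml
# Sketch only: the openshiftSDNConfig and vxlanPort field names are assumed,
# and the port value is arbitrary.
apiVersion: networkoperator.openshift.io/v1
kind: NetworkConfig
metadata:
  name: default
spec:
  defaultNetwork:
    type: OpenShiftSDN
    openshiftSDNConfig:
      vxlanPort: 4800   # non-default port; everything else keeps its generated value
```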


#### Configuration objects
*Cluster config*
- *Type Name*: `Network.config.openshift.io`
- *Instance Name*: `cluster`
Contributor: (I assume this name matches what other people are doing?)

Contributor (author): Yup. Once the installer switches from NetworkConfig to Network, we can align the names if we like.

- *View Command*: `oc get Network.config.openshift.io cluster -oyaml`

*Operator config*
- *Type Name*: `NetworkConfig.networkoperator.openshift.io`
- *Instance Name*: `default` (the configuration must be named "default")
- *View Command*: `oc get NetworkConfig.networkoperator.openshift.io default -oyaml`

#### Example configurations

A configuration with minimum parameters set:

*Cluster Config*
```yaml
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
```

*Corresponding Operator Config*

This configuration is the auto-generated translation of the above Cluster configuration.

```yaml
apiVersion: networkoperator.openshift.io/v1
kind: NetworkConfig
metadata:
  name: default
spec:
  additionalNetworks: null
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  defaultNetwork:
    type: OpenShiftSDN
  serviceNetwork: 172.30.0.0/16
```

## Configuring IP address pools
Users must supply at least two address pools - one for pods, and one for services. These are the ClusterNetworks and ServiceNetwork parameters. Some network plugins, such as OpenShiftSDN, support multiple ClusterNetworks. All address blocks must be non-overlapping. You should select address pools large enough to fit your anticipated workload.

For future expansion, the configuration allows multiple `serviceNetwork` entries, but no network plugin currently supports more than one; supplying multiple service addresses is invalid.
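For example, a hypothetical Cluster configuration fragment with two non-overlapping pod pools (OpenShiftSDN supports more than one) and the single allowed service pool; the addresses are illustrative:

```yaml
# Illustrative addresses only.
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14    # pod pool 1
    hostPrefix: 23
  - cidr: 10.132.0.0/14    # pod pool 2, does not overlap pool 1
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16          # service pool; exactly one entry
```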

Each `clusterNetwork` entry has an additional required parameter, `hostPrefix`, that specifies the address size to assign to each individual node. For example,
```yaml
cidr: 10.128.0.0/14
hostPrefix: 23
```
means nodes would get blocks of size `/23`, or 512 addresses.

IP address pools are always read from the Cluster configuration and propagated "downwards" into the Operator configuration. Any changes to the Operator configuration will be ignored.

Currently, changing the address pools once set is not supported. In the future, some network providers may support expanding the address pools.

Each ClusterNetwork entry has an additional required parameter, `hostSubnetLength`, that specifies the address size to assign to each individual node. Note that this is currently *reversed* from the usual CIDR notation - a hostSubnetLength of 9 means that the node will be assigned a /23.
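A side-by-side sketch of the same per-node allocation expressed in both objects, reusing the addresses from the example above (32 - 23 = 9):

```yaml
# Cluster config (Network.config.openshift.io): prefix length per node
clusterNetwork:
- cidr: 10.128.0.0/14
  hostPrefix: 23         # each node gets a /23, i.e. 512 addresses

# Operator config (NetworkConfig.networkoperator.openshift.io): host bits per node
clusterNetworks:
- cidr: 10.128.0.0/14
  hostSubnetLength: 9    # 32 - 23 = 9, again a /23 per node
```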

Example
```yaml
...
```

@@ -48,6 +92,8 @@ spec:
## Configuring the default network provider
Users must select a default network provider. This cannot be changed. Different network providers have additional provider-specific settings.

The network type is always read from the Cluster configuration.

Currently, the only supported value for network Type is `OpenShiftSDN`.

### Configuring OpenShiftSDN
@@ -57,6 +103,8 @@

OpenShiftSDN supports the following configuration options, all of which are optional:
* `MTU`: The MTU to use for the VXLAN overlay. The default is the MTU of the node that the cluster-network-operator is first run on, minus 50 bytes for overhead. If the nodes in your cluster don't all have the same MTU then you will need to set this explicitly.
* `useExternalOpenvswitch`: boolean. If the nodes are already running openvswitch, and OpenShiftSDN should not install its own, set this to true. This is only needed for certain advanced installations with DPDK or OpenStack.

These configuration flags are only in the Operator configuration object.

Example:
```yaml
spec:
...
```

@@ -76,7 +124,7 @@

Users may customize the kube-proxy configuration. None of these settings are required.
* `bindAddress`: The address to "bind" to - the address for which traffic will be redirected.
* `proxyArguments`: additional command-line flags to pass to kube-proxy - see the [documentation](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/).

The top-level flag `deployKubeProxy` tells the network operator to explicitly deploy a kube-proxy process. Generally, you will not need to provide this; the operator will decide appropriately. For example, OpenShiftSDN includes an embedded service proxy, so this flag is automatically false in that case.

Example:
```yaml
...
```
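For illustration, a sketch of these settings in the Operator configuration, assuming they sit under a `kubeProxyConfig` stanza and that `proxyArguments` maps a flag name to a list of values; the stanza name and argument format are assumptions, not taken from the collapsed example above:

```yaml
# Sketch only: the kubeProxyConfig stanza name and the proxyArguments
# format (flag name -> list of values) are assumptions for illustration.
spec:
  deployKubeProxy: false      # OpenShiftSDN already embeds a service proxy
  kubeProxyConfig:
    bindAddress: 0.0.0.0
    proxyArguments:
      proxy-mode:
      - iptables
```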
19 changes: 19 additions & 0 deletions manifests/0000_07_cluster-network-operator_01_crd.yaml
@@ -14,3 +14,22 @@ spec:
  - name: v1
    served: true
    storage: true

---

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networks.config.openshift.io
spec:
  group: config.openshift.io
  names:
    kind: Network
    listKind: NetworkList
    plural: networks
    singular: network
  scope: Cluster
  versions:
  - name: v1
    served: true
    storage: true
6 changes: 5 additions & 1 deletion pkg/controller/add_networkconfig.go
@@ -1,10 +1,14 @@
 package controller
 
 import (
+	"github.com/openshift/cluster-network-operator/pkg/controller/clusterconfig"
 	"github.com/openshift/cluster-network-operator/pkg/controller/networkconfig"
 )
 
 func init() {
 	// AddToManagerFuncs is a list of functions to create controllers and add them to a manager.
-	AddToManagerFuncs = append(AddToManagerFuncs, networkconfig.Add)
+	AddToManagerFuncs = append(AddToManagerFuncs,
+		networkconfig.Add,
+		clusterconfig.Add,
+	)
 }
113 changes: 113 additions & 0 deletions pkg/controller/clusterconfig/clusterconfig_controller.go
@@ -0,0 +1,113 @@
package clusterconfig

import (
	"context"
	"log"

	"github.com/pkg/errors"

	configv1 "github.com/openshift/api/config/v1"
	"github.com/openshift/cluster-network-operator/pkg/apply"
	"github.com/openshift/cluster-network-operator/pkg/names"
	"github.com/openshift/cluster-network-operator/pkg/network"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// Add creates a new ClusterConfig Controller and adds it to the Manager. The Manager will set fields on the Controller
// and Start it when the Manager is Started.
func Add(mgr manager.Manager) error {
	return add(mgr, newReconciler(mgr))
}

// newReconciler returns a new reconcile.Reconciler
func newReconciler(mgr manager.Manager) reconcile.Reconciler {
	configv1.Install(mgr.GetScheme())
	return &ReconcileClusterConfig{client: mgr.GetClient(), scheme: mgr.GetScheme()}
}

// add adds a new Controller to mgr with r as the reconcile.Reconciler
func add(mgr manager.Manager, r reconcile.Reconciler) error {
	// Create a new controller
	c, err := controller.New("clusterconfig-controller", mgr, controller.Options{Reconciler: r})
	if err != nil {
		return err
	}

	// Watch for changes to primary resource config.openshift.io/v1/Network
	err = c.Watch(&source.Kind{Type: &configv1.Network{}}, &handler.EnqueueRequestForObject{})
	if err != nil {
		return err
	}

	return nil
}

var _ reconcile.Reconciler = &ReconcileClusterConfig{}

// ReconcileClusterConfig reconciles a cluster Network object
type ReconcileClusterConfig struct {
	// This client, initialized using mgr.Client() above, is a split client
	// that reads objects from the cache and writes to the apiserver
	client client.Client
	scheme *runtime.Scheme
}

// Reconcile propagates changes from the cluster config to the operator config.
// In other words, it watches Network.config.openshift.io/v1/cluster and updates
// NetworkConfig.networkoperator.openshift.io/v1/default.
func (r *ReconcileClusterConfig) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	log.Printf("Reconciling Network.config.openshift.io %s\n", request.Name)

	// We won't create more than one network
	if request.Name != names.CLUSTER_CONFIG {
		log.Printf("Ignoring Network without default name " + names.CLUSTER_CONFIG)
		return reconcile.Result{}, nil
	}

	// Fetch the cluster config
	clusterConfig := &configv1.Network{}
	err := r.client.Get(context.TODO(), request.NamespacedName, clusterConfig)
	if err != nil {
		if apierrors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request.
			// Return and don't requeue
			log.Println("Object seems to have been deleted")
			return reconcile.Result{}, nil
		}
		// Error reading the object - requeue the request.
		log.Println(err)
		return reconcile.Result{}, err
	}

	// Validate the cluster config
	if err := network.ValidateClusterConfig(clusterConfig.Spec); err != nil {
		err = errors.Wrapf(err, "failed to validate Network.Spec")
		log.Println(err)
		return reconcile.Result{}, err
	}

	operatorConfig, err := r.UpdateOperatorConfig(context.TODO(), *clusterConfig)
	if err != nil {
		err = errors.Wrapf(err, "failed to generate NetworkConfig CRD")
		log.Println(err)
		return reconcile.Result{}, err
	}

	if operatorConfig != nil {
		if err := apply.ApplyObject(context.TODO(), r.client, operatorConfig); err != nil {
			err = errors.Wrapf(err, "could not apply (%s) %s/%s", operatorConfig.GroupVersionKind(), operatorConfig.GetNamespace(), operatorConfig.GetName())
			log.Println(err)
			return reconcile.Result{}, err
		}
	}

	log.Printf("successfully updated ClusterNetwork (%s) %s/%s", operatorConfig.GroupVersionKind(), operatorConfig.GetNamespace(), operatorConfig.GetName())
	return reconcile.Result{}, nil
}
46 changes: 46 additions & 0 deletions pkg/controller/clusterconfig/config.go
@@ -0,0 +1,46 @@
package clusterconfig

import (
	"context"
	"reflect"

	"github.com/pkg/errors"

	configv1 "github.com/openshift/api/config/v1"
	netopv1 "github.com/openshift/cluster-network-operator/pkg/apis/networkoperator/v1"
	"github.com/openshift/cluster-network-operator/pkg/names"
	"github.com/openshift/cluster-network-operator/pkg/network"
	k8sutil "github.com/openshift/cluster-network-operator/pkg/util/k8s"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	uns "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/types"
)

// UpdateOperatorConfig merges the cluster network configuration into the
// operator configuration.
// The operator's CRD is necessarily much more complicated, and 99% of users
// will not need to create or touch it. So, they can touch the Network config.
// Any changes in the cluster config will be noticed by the operator and merged
// into the operator CRD.
// This returns nil if no changes have been detected.
func (r *ReconcileClusterConfig) UpdateOperatorConfig(ctx context.Context, clusterConfig configv1.Network) (*uns.Unstructured, error) {
	operConfig := &netopv1.NetworkConfig{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networkoperator.openshift.io/v1", Kind: "NetworkConfig"},
		ObjectMeta: metav1.ObjectMeta{Name: names.OPERATOR_CONFIG},
	}

	err := r.client.Get(ctx, types.NamespacedName{Name: names.OPERATOR_CONFIG}, operConfig)
	if err != nil && !apierrors.IsNotFound(err) {
		return nil, errors.Wrapf(err, "could not retrieve networkoperator.openshift.io/NetworkConfig %s", names.OPERATOR_CONFIG)
	}

	newOperConfig := operConfig.DeepCopy()
	network.MergeClusterConfig(&newOperConfig.Spec, clusterConfig.Spec)
	if reflect.DeepEqual(newOperConfig.Spec, operConfig.Spec) {
		return nil, nil
	}

	return k8sutil.ToUnstructured(newOperConfig)
}