Epiphany supports the following CNI plugins:

- Flannel
- Calico
- Canal

Flannel is the default setting in the Epiphany configuration.
NOTE: Calico is not supported on Azure. To be able to use network policies, choose Canal.
Use the following configuration to set up an appropriate CNI plugin:
```yaml
kind: configuration/kubernetes-master
name: default
specification:
  advanced:
    networking:
      plugin: flannel
```
Currently, Epiphany provides the following predefined applications which may be deployed with epicli:
- rabbitmq
- pgpool
- pgbouncer
All of them have a default configuration. The common parameters are: `name`, `enabled`, `namespace`, `image_path` and `use_local_image_registry`.
If you set `use_local_image_registry` to `false` in the configuration manifest, you have to provide a valid Docker image path in `image_path`. Kubernetes will then try to pull the image from the `image_path` value externally.
To see which version of the application image is in the local image registry, please refer to the components list.

Note: The above link points to the develop branch. Please choose the branch that matches the Epiphany version you are using.
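As an illustration only, a configuration document for one of these applications might look like the sketch below. It assumes a typical Epiphany `configuration/applications` document; the exact structure and defaults may differ between Epiphany versions, and the namespace and image tag used here are placeholders:

```yaml
kind: configuration/applications
title: "Kubernetes Applications Config"
name: default
specification:
  applications:
    - name: rabbitmq                 # one of the predefined applications
      enabled: true                  # deploy this application with epicli
      namespace: queue               # illustrative namespace, not a default
      use_local_image_registry: false
      image_path: rabbitmq:3.8.9     # external image Kubernetes pulls when the local registry is not used
```

Include such a document in your main config file and run `epicli apply` to deploy the enabled applications.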
- Create a NodePort service type for your application in Kubernetes.
- Make sure your service has a statically assigned `nodePort` (a number between 30000-32767), for example 31234. More info here. A minimal sketch of such a service is shown after this list.
- Add a configuration document for `load_balancer`/HAProxy to your main config file:

  ```yaml
  kind: configuration/haproxy
  title: "HAProxy"
  name: haproxy
  specification:
    frontend:
      - name: https_front
        port: 443
        https: yes
        backend:
          - http_back1
    backend:
      - name: http_back1
        server_groups:
          - kubernetes_node
        port: 31234
  provider: <your-provider-here-replace-it>
  ```

- Run `epicli apply`.
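For the first two steps, a minimal sketch of a NodePort service with a statically assigned `nodePort` could look like this; the service name, labels and ports are hypothetical and need to match your own deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app             # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app            # must match your Deployment's pod labels
  ports:
    - port: 80             # port exposed inside the cluster
      targetPort: 8080     # port your container listens on
      nodePort: 31234      # statically assigned, must match the HAProxy backend port
```

The `nodePort` value (31234 here) is what the `port` of the HAProxy backend in the document above has to point at.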
The Kubernetes that comes with Epiphany has an admin account created; you should consider creating more roles and accounts, especially when many deployments run in different namespaces.
To learn more about RBAC in Kubernetes, use this link.
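As a hedged illustration of namespace-scoped RBAC, a read-only role and a binding for a hypothetical service account might look like this (all names and the namespace are examples):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a          # hypothetical namespace
  name: pod-reader
rules:
  - apiGroups: [""]          # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: team-a-deployer    # hypothetical account
    namespace: team-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Use ClusterRole/ClusterRoleBinding instead when the permissions have to span namespaces.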
When Kubernetes schedules a Pod, it’s important that the Containers have enough resources to actually run. If you schedule a large application on a node with limited resources, it is possible for the node to run out of memory or CPU resources and for things to stop working! It’s also possible for applications to take up more resources than they should.
When you specify a Pod, it is strongly recommended to specify how much CPU and memory (RAM) each Container needs. Requests are what the Container is guaranteed to get. If a Container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits make sure a Container never goes above a certain value. For more details about the difference between requests and limits, see the Kubernetes docs.
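For illustration, a Pod spec declaring both requests and limits might look like the sketch below; the names and values are arbitrary examples, not recommendations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-app           # hypothetical pod name
spec:
  containers:
    - name: app
      image: nginx:1.21      # example image
      resources:
        requests:
          cpu: "250m"        # guaranteed: scheduler only picks nodes that can provide this
          memory: "256Mi"
        limits:
          cpu: "500m"        # hard ceiling: CPU usage above this is throttled
          memory: "512Mi"    # exceeding this gets the container OOM-killed
```

With these values the scheduler only places the Pod on a node with at least 250m CPU and 256Mi memory available, and the kubelet enforces the limits at runtime.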
For more information, see the links below:
- SSH into the server and forward port 8001 to your machine:

  ```bash
  ssh -i epi_keys/id_rsa <user>@<master-ip> -L 8001:localhost:8001
  ```

  NOTE: substitute the IP with your cluster master's IP.
- On the remote host, get the admin token bearer:

  ```bash
  kubectl describe secret $(kubectl get secrets --namespace=kube-system | grep admin-token | awk '{print $1}') --namespace=kube-system | grep -E '^token' | awk '{print $2}' | head -1
  ```

  NOTE: save this token for the next steps.

- On the remote host, open a proxy to the dashboard:

  ```bash
  kubectl proxy
  ```

- Now on your local machine navigate to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

- When prompted for credentials, use the admin token from the previous step.
Audit logs are stored in the `/var/log/kubernetes/audit/` directory on control plane nodes.
Rotation can be configured:
```yaml
kind: configuration/kubernetes-master
title: Kubernetes Master Config
name: default
specification:
  advanced:
    api_server_args:
      audit-log-maxbackup: 10
      audit-log-maxsize: 200
```
Refer to the K8s documentation to check the meaning of these values. The sample above shows the defaults.