This project contains Ansible code that creates a baseline cluster in an existing Kubernetes environment for use with SAS Viya 4, generates the manifest for a SAS Viya software order, and then deploys that order into the specified Kubernetes environment. Here is a list of tasks that this tool can perform:
- Prepare Kubernetes cluster
  - Deploy ingress-nginx
  - Deploy nfs-subdir-external-provisioner for PVs
  - Deploy cert-manager for TLS
  - Deploy metrics-server
  - Manage storageClass settings
- Deploy SAS Viya
  - Retrieve the deployment assets using the SAS Viya Orders CLI
  - Retrieve cloud configuration from tfstate (if using a SAS Viya 4 IaC project)
  - Run the kustomize process and deploy SAS Viya
  - Create affinity rules so that processes are targeted to appropriately labeled nodes
  - Create pod disruption budgets for each service so that cluster maintenance (during a node maintenance operation, for example) does not take down the last instance of a service
  - Use kustomize to mount user private (home) directories and data directories on CAS nodes and on compute server instances
  - Deploy SAS Viya Monitoring for Kubernetes
  - Deploy MPP or SMP CAS servers
- Manage SAS Viya Deployments
  - Organize and persist configuration for any number of SAS Viya deployments across namespaces, clusters, or cloud providers
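The pod disruption budgets mentioned above can be pictured with a sketch like the following. This is illustrative only: the service name and label are hypothetical, and the manifests the playbook actually generates may differ.

```yaml
# Illustrative PodDisruptionBudget; name and labels are placeholders.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sas-example-service
spec:
  minAvailable: 1              # keep at least one instance running during a node drain
  selector:
    matchLabels:
      app: sas-example-service
```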
NOTE: These tools do not support a SAS Viya deployment with multi-tenancy enabled at this time. Support is planned for a future release.
Use of these tools requires operational knowledge of the following technologies:
- Ansible
- Docker
- Kubernetes
- Your selected cloud provider
The viya4-deployment playbook requires some infrastructure: you can either bring your own Kubernetes cluster or use one of the SAS Viya 4 IaC projects to create a cluster with Terraform scripts.
A file server that uses the network file system (NFS) protocol is the minimum requirement for SAS Viya. You can either use one of the SAS Viya 4 IaC projects to create the required storage or bring your own Kubernetes storage. If you use the SAS Viya 4 IaC projects to create an NFS server VM and a jump box (bastion server) VM, the information can be passed in to viya4-deployment so that the required directory structures discussed in the next sections are created automatically. If you are bringing your own NFS-compliant server, the following NFS exports folder structure must be created prior to running viya4-deployment:
<export_dir>                     <- NFS export path
    /pvs                         <- location for persistent volumes
    /<namespace>                 <- folder per namespace
        /bin                     <- location for open source directories
        /data                    <- location for SAS and CAS data
        /homes                   <- location for user home directories to be mounted
        /astores                 <- location for astores
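For a bring-your-own NFS server, the export structure above can be created with a few mkdir calls. This is a sketch: `./export` stands in for your real `<export_dir>`, and `viya4` for your namespace.

```shell
# Sketch: create the NFS export folder structure described above.
# "./export" stands in for your real <export_dir>; "viya4" for your namespace.
export_dir=./export
namespace=viya4

mkdir -p "${export_dir}/pvs"
mkdir -p "${export_dir}/${namespace}/bin" \
         "${export_dir}/${namespace}/data" \
         "${export_dir}/${namespace}/homes" \
         "${export_dir}/${namespace}/astores"
```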
The jump box or bastion server can manage NFS folders if you provide SSH access to it. The jump box must have the NFS storage mounted at <JUMP_SVR_RWX_FILESTORE_PATH>. If you want to manage the NFS server yourself, the jump box is not required. Here is the required folder structure for the jump box:
<JUMP_SVR_RWX_FILESTORE_PATH>    <- mounted NFS export
    /pvs                         <- location for persistent volumes
    /<namespace>                 <- folder per namespace
        /bin                     <- location for open source directories
        /data                    <- location for SAS and CAS data
        /homes                   <- location for user home directories to be mounted
        /astores                 <- location for astores
Run the following commands in a terminal session:
# clone this repository
git clone https://github.com/sassoftware/viya4-deployment
# move to directory
cd viya4-deployment
See Ansible Cloud Authentication for details.
NOTE: At this time, additional setup is only required for GCP with external PostgreSQL.
The playbook uses Ansible variables for configuration. The ansible-vars.yaml file is the main configuration file: create a file named ansible-vars.yaml to specify values for any input variables. Example variable definition files are provided in the ./examples folder. For details on the supported variables, refer to CONFIG-VARS.md. SAS recommends that you encrypt this file and the other configuration files (sitedefault, kubeconfig, and tfstate) using Ansible Vault.
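As a sketch, an ansible-vars.yaml might look like the following. All values are placeholders; NAMESPACE is an assumed variable name, the others appear elsewhere in this document, and the authoritative list is in CONFIG-VARS.md.

```yaml
# Placeholder values only; verify variable names against CONFIG-VARS.md
NAMESPACE: demo-ns                              # assumed variable name
V4_CFG_RWX_FILESTORE_ENDPOINT: nfs.example.com  # your NFS server
V4_CFG_RWX_FILESTORE_PATH: /export              # your NFS export path
JUMP_SVR_HOST: jump.example.com
JUMP_SVR_USER: jumpuser
JUMP_SVR_RWX_FILESTORE_PATH: /viya-share
```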
The value is the path to a standard SAS Viya sitedefault file. If none is supplied, the example sitedefault.yaml file is used. A sitedefault file is not required for a SAS Viya deployment.
The Kubernetes access configuration file. If you used one of the SAS Viya 4 IaC projects to provision your cluster, this value is not required.
If you used a SAS Viya 4 IaC project to provision your cluster, you can provide the resulting tfstate file to have the kubeconfig and other settings auto-discovered. The ansible-vars-iac.yaml example file shows the values that must be set when using the SAS Viya 4 IaC integration.
The following information is parsed from the integration:
- Cloud
  - PROVIDER
  - PROVIDER_ACCOUNT
  - CLUSTER_NAME
  - Cloud NAT IP address
- RWX Filestore
  - V4_CFG_RWX_FILESTORE_ENDPOINT
  - V4_CFG_RWX_FILESTORE_PATH
- JumpBox
  - JUMP_SVR_HOST
  - JUMP_SVR_USER
  - JUMP_SVR_RWX_FILESTORE_PATH
- Postgres
  - V4_CFG_POSTGRES_SERVERS (if postgres is deployed)
- Cluster
  - KUBECONFIG
  - V4_CFG_CLUSTER_NODE_POOL_MODE
  - CLUSTER_AUTOSCALER_ACCOUNT
  - CLUSTER_AUTOSCALER_LOCATION
- Ingress
  - V4_CFG_INGRESS_MODE (from CLUSTER_API_MODE)
The Ansible playbook in viya4-deployment fully manages the kustomization.yaml file. Users can make changes by adding custom overlays into subfolders under the /site-config folder. If this is the first time that you are running the playbook and plan to add customizations, create the following folder structure:
<base_dir>                       <- parent directory
    /<cluster>                   <- folder per cluster
        /<namespace>             <- folder per namespace
            /site-config         <- location for all customizations
                ...              <- folders containing user defined customizations
SAS Viya deployment customizations are automatically read in from folders under /site-config. To provide customizations, first create the folder structure detailed in the Customize Deployment Overlays section above. Then copy the desired overlays into a subfolder under /site-config. When you have completed these steps, you can run the viya4-deployment playbook. It will detect the overlays and add them to the proper section of the kustomization.yaml file for the SAS Viya deployment.
Note: You do not need to modify the kustomization.yaml file. The playbook automatically adds the custom overlays to the kustomization.yaml file, based on the values you have specified.
For example, assume the following:
- /deployments is the BASE_DIR
- The target cluster is named demo-cluster
- The namespace will be named demo-ns
- You add a custom overlay that modifies the CAS server
/deployments                     <- parent directory
    /demo-cluster                <- folder per cluster
        /demo-ns                 <- folder per namespace
            /site-config         <- location for all customizations
                /cas-server      <- folder containing user defined customizations
                    /my_custom_overlay.yaml <- my custom overlay
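The example layout above can be created with commands like these. This is a sketch: `./deployments` stands in for your real BASE_DIR, and the overlay file is written as a stub for illustration.

```shell
# Sketch: build the cas-server customization folder from the example above.
base_dir=./deployments
overlay_dir="${base_dir}/demo-cluster/demo-ns/site-config/cas-server"
mkdir -p "${overlay_dir}"
# Copy your real overlay here; a stub file is written for illustration.
printf '# kustomize overlay content\n' > "${overlay_dir}/my_custom_overlay.yaml"
```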
If the embedded OpenLDAP server is enabled, it is possible to change the users and groups that will be created. The required steps are similar to other customizations:
- Create the folder structure detailed in the Customize Deployment Overlays section.
- Copy the ./examples/openldap folder into the /site-config folder.
- Locate the openldap-modify-users.yaml file in the /openldap folder.
- Modify it to match the desired setup.
- Run the viya4-deployment playbook. It will use this setup when creating the OpenLDAP server.
Note: This method can only be used when you are first deploying the OpenLDAP server. Subsequently, you can either delete and redeploy the OpenLDAP server with a new configuration, or add users using ldapadd.
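For the ldapadd route, an entry file might look like the following sketch. All DN components and attribute values here are hypothetical; match them to your OpenLDAP configuration.

```ldif
# new-user.ldif -- hypothetical entry; adjust the DN and attributes to your directory
dn: uid=newuser,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: newuser
cn: New User
sn: User
uidNumber: 2001
gidNumber: 2001
homeDirectory: /home/newuser
```

It could then be applied with something like `ldapadd -x -H ldap://<host> -D "cn=admin,dc=example,dc=com" -W -f new-user.ldif`, where the bind DN depends on your directory setup.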
For example, assume the following:
- /deployments is the BASE_DIR
- The cluster is named demo-cluster
- The namespace will be named demo-ns
- You add an overlay with a custom LDAP setup
/deployments                     <- parent directory
    /demo-cluster                <- folder per cluster
        /demo-ns                 <- folder per namespace
            /site-config         <- location for all customizations
                /openldap        <- folder containing user defined customizations
                    /openldap-modify-users.yaml <- openldap overlay
Create and manage deployments using one of the following methods:
- running the Docker container (recommended)
- running Ansible directly on your workstation
During the installation, an ingress load balancer can be installed for SAS Viya and for the monitoring and logging stack. The host name for these services must be registered with your DNS provider in order to resolve to the LoadBalancer endpoint. This can be done by creating a record for each unique ingress controller host.
However, when you are managing multiple SAS Viya deployments, creating these records can be time-consuming. In such a case, SAS recommends creating a DNS record that points to the ingress controller's endpoint. The endpoint might be an IP address or FQDN, depending on the cloud provider. Take these steps:
- Create an A record or CNAME (depending on cloud provider) that resolves the desired host name to the LoadBalancer endpoint.
- Create a wildcard CNAME record that resolves to the record that you created in the previous step.
For example:
First, look up the ingress controller's LoadBalancer endpoint. The example below uses kubectl. This information can also be looked up in the cloud provider's admin portal.
$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.0.225.39 52.52.52.52 80:30603/TCP,443:32014/TCP 12d
ingress-nginx-controller-admission ClusterIP 10.0.99.105 <none> 443/TCP 12d
In the above example, the ingress controller's LoadBalancer endpoint is 52.52.52.52. So, we would create the following records:
- An A record (such as example.com) that points to the 52.52.52.52 address
- A wildcard CNAME (*.example.com) that points to example.com
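To script the endpoint lookup, the EXTERNAL-IP column can be parsed from the service listing. The sketch below parses the sample output shown earlier; against a live cluster, `kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` is an alternative.

```shell
# Parse the EXTERNAL-IP (fourth column) from the sample listing shown earlier.
line='ingress-nginx-controller   LoadBalancer   10.0.225.39   52.52.52.52   80:30603/TCP,443:32014/TCP   12d'
endpoint=$(echo "$line" | awk '{print $4}')
echo "$endpoint"   # the LoadBalancer endpoint to register in DNS
```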
When running the viya action with V4_CFG_CONNECT_ENABLE_LOADBALANCER=true, a separate LoadBalancer service is created to allow external SAS/CONNECT clients to connect to SAS Viya. You will need to register this LoadBalancer endpoint with your DNS provider so that the desired host name (for example, connect.example.com) points to the LoadBalancer endpoint.
See the Troubleshooting page.
We welcome your contributions! See CONTRIBUTING.md for details on how to submit contributions to this project.
This project is licensed under the Apache 2.0 License.