
[v5.10.x] Avoid Packaging the NFS Server Provisioner #242

Closed
chirangaalwis opened this issue Jul 30, 2020 · 3 comments · Fixed by #244

chirangaalwis commented Jul 30, 2020

Description:
When testing the latest WSO2 Identity Server version 5.10.0 pattern 1 Helm chart with the Pipeline, it was noted that deployments across multiple environments (in this case, each environment is represented by a Kubernetes Namespace) fail because each deployment attempts to install/update the NFS Server Provisioner (packaged by default in WSO2 Helm charts) in the same Kubernetes cluster.

Please see the Spinnaker CloudDriver component error log corresponding to this event.

ERROR 1 --- [tionProcessor-2] c.n.s.c.o.DefaultOrchestrationProcessor  : com.netflix.spinnaker.clouddriver.kubernetes.v2.op.job.KubectlJobExecutor$KubectlException: Deploy failed: The StorageClass "nfs" is invalid: provisioner: Forbidden: updates to provisioner are forbidden.

As you may know, we currently package the NFS Server Provisioner in the IAM Helm chart so that evaluation users can easily and dynamically provision the Kubernetes Persistent Volumes required for sharing and persisting runtime artifacts such as userstores. Ideally, however, a Kubernetes StorageClass requires only a one-time installation per Kubernetes cluster.
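For illustration, a minimal reproduction sketch of how the conflict surfaces (the namespace names below are hypothetical; the chart name and the "nfs" StorageClass name are taken from this issue):

# Hypothetical reproduction: two deployments of the same chart in different namespaces
# both try to manage the cluster-scoped StorageClass "nfs" packaged with the chart.
kubectl create ns dev
helm install identity-server wso2/is-pattern-1 --namespace dev     # first deployment succeeds

kubectl create ns test
helm install identity-server wso2/is-pattern-1 --namespace test    # second deployment fails:
# the cluster-scoped StorageClass "nfs" already exists, and attempts to update its
# provisioner field are rejected (the "Forbidden: updates to provisioner are forbidden" error above).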

Thus, removal of the NFS Server Provisioner from the packaged product Helm chart is ideal.

Affected Product Version:
Helm Resources for WSO2 IAM version 5.10.x and above

Related Issue:
#234

chirangaalwis commented Jul 30, 2020

Currently, we are planning to follow the approach outlined below.

  • Avoid packaging the evaluation-only NFS Server Provisioner in the IAM Helm chart

This is in light of the aforementioned issue, the lack of production readiness of the NFS Server Provisioner, and its incompatibility with some infrastructure, based on recent experiences.

  • Provide users the option to share/persist runtime artifacts (disabled by default) and to supply the desired persistent storage solution when enabled

This will not increase the number of steps required to install the Helm chart, even if the user wishes to persist/share the runtime artifacts.

However, it does add an extra prerequisite: an appropriate StorageClass mapping to the desired storage solution must be provided, which in our experience is the most popular approach among users (see the sketch after this list).

Plus, documenting the tried and tested storage solutions is of utmost importance.

  • Use documentation to strongly recommend sharing/persisting the runtime artifacts in a production-grade deployment in the long run
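As referenced above, a minimal sketch of satisfying the StorageClass prerequisite once per cluster, assuming the community stable/nfs-server-provisioner chart is used for evaluation purposes (the repository alias, release name, namespace, and the default StorageClass name "nfs" are assumptions here):

# Assumption: the community NFS Server Provisioner chart, installed once per cluster.
helm repo add stable https://charts.helm.sh/stable
helm install nfs-provisioner stable/nfs-server-provisioner --namespace kube-system
# This chart creates a StorageClass (named "nfs" by default), which can then be passed
# to the IAM chart via wso2.deployment.persistentRuntimeArtifacts.storageClass.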

Please feel free to share your thoughts and concerns regarding this matter.

chirangaalwis commented Jul 30, 2020

In the IAM chart, persistence and sharing of runtime artifacts can be enabled by setting the following Helm input values:

  • wso2.deployment.persistentRuntimeArtifacts.sharedArtifacts.enabled to true
  • wso2.deployment.persistentRuntimeArtifacts.storageClass to the desired Kubernetes Storage Class

e.g. set the Kubernetes StorageClass to the one created by the NFS Server Provisioner (using Helm version 3):

kubectl create ns <NAMESPACE>

helm install identity-server wso2/is-pattern-1 \
  --set wso2.deployment.persistentRuntimeArtifacts.sharedArtifacts.enabled=true \
  --set wso2.deployment.persistentRuntimeArtifacts.storageClass=nfs \
  --namespace <NAMESPACE>
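For a quick sanity check after installation (the "nfs" StorageClass name and the namespace placeholder follow the example above; the exact claim names depend on the chart), one can confirm that the StorageClass exists and that the chart's shared persistent volume claims are bound:

kubectl get storageclass nfs
kubectl get pvc --namespace <NAMESPACE>   # shared runtime-artifact claims should reach the Bound status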

@chirangaalwis

Fixed.
