Describe the solution you'd like
Hello, currently it seems a SecretProviderClass must be mounted into a pod before the driver starts creating Kubernetes Secrets. Would it be possible to have a SecretProviderClass synced even without a pod or a volume mount?
Our use case is that we want to sync our wildcard certificate once for all apps/ingresses in the namespace that use it, without having to pick a specific app that has the side effect of ensuring the certificate is present in the namespace. This would also allow us to have one specific managed identity/workload identity accessing the key vault for that certificate, instead of different apps with different identities all having access to the same key vault.
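To illustrate, this is roughly the SecretProviderClass we would want synced on its own; all names, the key vault, and the identity values below are placeholders for our setup, and the `secretObjects` section is the part that today only takes effect once some pod mounts the volume:

```yaml
# Hypothetical example: names, key vault, and identity are placeholders.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: wildcard-cert
  namespace: apps
spec:
  provider: azure
  # Desired behavior: the driver would create this Secret as soon as the
  # SecretProviderClass exists, without any pod mounting it.
  secretObjects:
    - secretName: wildcard-tls
      type: kubernetes.io/tls
      data:
        - objectName: wildcard-cert
          key: tls.key
        - objectName: wildcard-cert
          key: tls.crt
  parameters:
    usePodIdentity: "false"
    clientID: <workload-identity-client-id>  # one identity for the whole namespace
    keyvaultName: <key-vault-name>
    tenantId: <tenant-id>
    objects: |
      array:
        - |
          objectName: wildcard-cert
          objectType: secret  # fetched as "secret" so the private key is included
```

Ingresses in the namespace could then reference `wildcard-tls` directly via `spec.tls.secretName`, with no app involved in producing it.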
Thank you for your time. :)
Environment:
- Secrets Store CSI Driver version (image tag): v1.4.3
- Azure Key Vault provider version (image tag): v1.5.2
This issue is critical in our scenario. Our application architecture and security practices prohibit applications from keeping or accepting secrets or keys via volume mounts or pod environment variables. Applications must read secret values directly from Kubernetes Secrets using the Kubernetes API.
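Under the current driver behavior, the only workaround we see is a dedicated "syncer" workload (not an application pod) that mounts the CSI volume purely to trigger creation of the synced Secret. A minimal sketch, assuming a SecretProviderClass named `wildcard-cert` already exists and a service account bound to the single workload identity; all names here are hypothetical:

```yaml
# Hypothetical workaround: a Deployment whose only job is to mount the CSI
# volume so the driver creates (and rotates) the synced Kubernetes Secret.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cert-syncer
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cert-syncer
  template:
    metadata:
      labels:
        app: cert-syncer
    spec:
      serviceAccountName: cert-syncer  # bound to the single workload identity
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9  # does nothing; the mount is the point
          volumeMounts:
            - name: secrets-store
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: wildcard-cert
```

This keeps secrets out of the application pods, but it burns a pod per namespace just to hold the mount open, which is what we would like to avoid.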
- Kubernetes version (`kubectl version`): v1.25.9