As an operator managing hundreds of Kubernetes clusters, with each cluster set up as a distinct auth method, managing these authentication configs becomes quite a tedious job. Currently, to configure multiple Kubernetes auth methods against one Vault server with path separation, we do this (correct me if I'm wrong):
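Roughly like this; the cluster names and hosts below are just placeholders:

```shell
# One Kubernetes auth mount per cluster, each at its own path
vault auth enable -path=kubernetes-cluster-a kubernetes
vault write auth/kubernetes-cluster-a/config \
    kubernetes_host="https://cluster-a.example.com:6443"

vault auth enable -path=kubernetes-cluster-b kubernetes
vault write auth/kubernetes-cluster-b/config \
    kubernetes_host="https://cluster-b.example.com:6443"
```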
and we will get two separate auth paths when doing a `vault auth list`. If we want to further granularize the path and namespace of each Kubernetes auth, we need to do something like this:
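For example, nesting each cluster's mount under an environment prefix (again, the names are placeholders):

```shell
# Still one mount per cluster, just nested under an environment prefix
vault auth enable -path=dev/kubernetes-cluster-a kubernetes
vault auth enable -path=dev/kubernetes-cluster-b kubernetes
```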
Is it possible to do something like what the secrets engines offer, where we enable a single auth method at a path such as `dev/`, and then, without separately enabling an auth path per cluster, write the config for each Kubernetes host directly under it, so that all associated policies are listed under the root auth path (`dev/`) instead of individually specifying each Kubernetes host path and treating it as a separate auth method?
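To illustrate, this is roughly the workflow I am imagining; the `clusters/<name>` sub-path below is hypothetical and not something Vault supports today:

```shell
# Hypothetical: a single Kubernetes auth mount at dev/ ...
vault auth enable -path=dev kubernetes

# ... with each cluster registered as a config under that one mount
# (the clusters/<name> endpoint does not exist; it is the behavior being requested)
vault write auth/dev/clusters/cluster-a kubernetes_host="https://cluster-a.example.com:6443"
vault write auth/dev/clusters/cluster-b kubernetes_host="https://cluster-b.example.com:6443"
```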