
User: X does not have appropriate auth credentials in kubeconfig #1113

Closed · tzachiabo opened this issue Dec 4, 2022 · 6 comments

Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

tzachiabo commented Dec 4, 2022

Describe the bug
I am trying to connect to an AKS cluster using the sample code described in the README.
My AKS cluster has RBAC enabled and AAD authentication with Azure RBAC,
but I keep getting "User: X does not have appropriate auth credentials in kubeconfig".
On a different cluster configured with "Local accounts with Kubernetes RBAC", everything works properly.

Kubernetes C# SDK Client Version
9.0.38

Server Kubernetes Version
1.23.12

Dotnet Runtime Version
.NET 6

To Reproduce
Use the sample code to connect to a k8s cluster:

```csharp
var azure = Microsoft.Azure.Management.Fluent.Azure
    .Authenticate(credentials)
    .WithSubscription(aksResource.SubscriptionId);

var kubeConfigBytes = azure.KubernetesClusters.GetUserKubeConfigContents(
    aksResource.ResourceGroup,
    aksResource.Resource
);

var kubeConfigRaw = KubernetesClientConfiguration.LoadKubeConfig(new MemoryStream(kubeConfigBytes));
var kubeConfig = KubernetesClientConfiguration.BuildConfigFromConfigObject(kubeConfigRaw);

var client = new Kubernetes(kubeConfig);
```

Expected behavior
A k8s client object is created with no error.

Where do you run your app with Kubernetes SDK (please complete the following information):

  • OS: Windows
  • Environment: N/A
  • Cloud: Azure


tg123 (Member) commented Dec 5, 2022

Could you please run `az aks get-credentials --resource-group <rg> --name <name> -f -`
to check what is inside the users section?

It should be something like this:

```yaml
users:
- name: xxxxxx
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --environment
      - AzurePublicCloud
      - --server-id
      - xxxxx
      - --client-id
      - xxxx
      - --tenant-id
      - xxxxx
      - --login
      - devicecode
      command: kubelogin
      env: null
      provideClusterInfo: false
```

Azure RBAC calls https://github.com/Azure/kubelogin to log in,
but your error is likely caused by the user context not being generated correctly.
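
To check for that missing user context programmatically, here is a minimal sketch (not from the thread) that dumps the exec plugin of each user entry. It assumes the `kubeConfigBytes` array from the repro code above and the `k8s.KubeConfigModels` types shipped with the C# client:

```csharp
// Hedged sketch: verify the users section of the kubeconfig returned by AKS
// actually carries the kubelogin exec plugin described above.
// `kubeConfigBytes` is assumed to be the byte[] from the issue's repro code.
using System;
using System.IO;
using k8s;

var kubeConfigRaw = KubernetesClientConfiguration.LoadKubeConfig(new MemoryStream(kubeConfigBytes));

foreach (var user in kubeConfigRaw.Users)
{
    // On an AAD + Azure RBAC cluster the user entry should have an exec block
    // whose command is kubelogin; if ExternalExecution is null, no usable user
    // context was generated and the client cannot authenticate.
    var exec = user.UserCredentials?.ExternalExecution;
    Console.WriteLine(exec == null
        ? $"user '{user.Name}': no exec plugin configured"
        : $"user '{user.Name}': exec command '{exec.Command}'");
}
```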

jptissot commented

I am not sure if this is the correct way, but I got a web app to work using Azure.Identity like this:

```csharp
using System.Collections.Generic;
using Azure.Core;
using Azure.Identity;
using k8s;
using k8s.KubeConfigModels;

var creds = new DefaultAzureCredential();

// This scope is the "Azure Kubernetes Service AAD Server" app from Microsoft.
// (az ad sp show --id 6dae42f8-4368-4678-94ff-3960e28e3630)
var token = await creds.GetTokenAsync(new TokenRequestContext(new string[] { "6dae42f8-4368-4678-94ff-3960e28e3630/.default" }));

var k8sconfig = new K8SConfiguration()
{
    Clusters = new List<Cluster>()
    {
        new Cluster()
        {
            Name = "spark-cluster",
            ClusterEndpoint = new ClusterEndpoint()
            {
                Server = "https://aks-server-url:443",
                CertificateAuthorityData = "b64encoded"
            }
        }
    },
    Contexts = new List<Context>() { new Context() { Name = "spark-cluster", ContextDetails = new ContextDetails() { Cluster = "spark-cluster" } } }
};

var kubeConfig = KubernetesClientConfiguration.BuildConfigFromConfigObject(k8sconfig, currentContext: "spark-cluster");
kubeConfig.AccessToken = token.Token;
```
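
A hedged usage sketch for the config above (the server URL and CA data are placeholders that would need real values; on older client versions the list call lives directly on the client rather than under `CoreV1`):

```csharp
// Usage sketch under the assumptions above: wire the config into a client and
// make a single call to confirm the bearer token is accepted.
var client = new Kubernetes(kubeConfig);

// On client versions without the grouped API this is client.ListNamespaceAsync().
var namespaces = await client.CoreV1.ListNamespaceAsync();
foreach (var ns in namespaces.Items)
    Console.WriteLine(ns.Metadata.Name);
```

One caveat with this approach: `AccessToken` is a static bearer token, and the token returned by `GetTokenAsync` expires (typically after about an hour), so a long-lived client would need to re-acquire the token and rebuild the config.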

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR has remained open with no activity and has become stale) on Mar 13, 2023
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Apr 12, 2023
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned on May 12, 2023
k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
