
fix: retrieving groups from header values #306

Merged: 1 commit into projectcapsule:master from issues/305

Conversation

prometherion
Member

Closes #305.

@msergg may I ask you to give these changes a try in your environment? Unfortunately, I don't have an Active Directory-backed environment: you will need to build the image on your own and push it to a container registry.

https://github.com/clastix/capsule-proxy/blob/193bdab39373d0dfa470ae09956dbb38f25bb069/Makefile#L20-L22


msergg commented Jun 21, 2023

I see the changes in the code; I'll check them. Thanks!


msergg commented Jun 22, 2023

@prometherion, I've checked this fix.

It works: the groups are impersonated correctly. However, in Active Directory a user can belong to a lot of groups (30-40 per user is not unusual):

"level":"Level(-4)","ts":"2023-06-22T07:32:59.272Z","logger":"proxy","msg":"impersonating for the current request","username":"u-qz4w2ddbmb","groups":["system:serviceaccounts","system:serviceaccounts:cattle-system","system:authenticated","activedirectory_group://CN=<example-cn>,OU=Vault_Stage,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Runcher,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Elasticsearch,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Elasticsearch,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Elasticsearch,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Elasticsearch,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Kafka,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Vault_Stage,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Kafka,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Kafka-stage,OU=Kafka,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Nexus,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Nexus,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Superset,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Nexus,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Jira Permissions Groups,OU=Test Distribution Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Kafka,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Services,OU=Identity,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Services,OU=Identity,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","activedirectory_group://CN=<example-cn>,OU=Services,OU=Identity,OU=Test Permissions Groups,OU=CorpGroups,DC=OFFICE,DC=CORP,DC=LOC","system:cattle:authenticated"],"uri":"/apis/elbv2.k8s.aws/v1alpha1"}

So after impersonation I start getting rate-limiting errors, and the proxy stops working:

I0622 07:34:39.736436       1 request.go:601] Waited for 1.02297946s due to client-side throttling, not priority and fairness, request: POST:https://172.20.0.1:443/apis/authentication.k8s.io/v1/tokenreviews
I0622 07:34:49.769582       1 request.go:601] Waited for 3.995919813s due to client-side throttling, not priority and fairness, request: POST:https://172.20.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews
I0622 07:34:59.820262       1 request.go:601] Waited for 4.546873818s due to client-side throttling, not priority and fairness, request: POST:https://172.20.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews
I0622 07:35:09.869764       1 request.go:601] Waited for 4.546539239s due to client-side throttling, not priority and fairness, request: POST:https://172.20.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews
2023/06/22 07:35:10 cannot retrieve user and group: client rate limiter Wait returned an error: context canceled
2023/06/22 07:35:10 cannot retrieve user and group: client rate limiter Wait returned an error: context canceled
2023/06/22 07:35:10 cannot retrieve user and group: client rate limiter Wait returned an error: context canceled
2023/06/22 07:35:10 cannot retrieve user and group: client rate limiter Wait returned an error: context canceled

Maybe some optimization is needed, such as using only the groups defined in the Tenant, or filtering out the unnecessary ones in some way?
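
The "Waited for ... due to client-side throttling" messages above come from client-go's default token-bucket rate limiter (QPS 5, burst 10), which is easy to exhaust when every proxied request triggers review calls. A minimal sketch of raising those limits on the client configuration, assuming an in-cluster config; the values are illustrative only:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientset builds a clientset with a higher client-side rate limit
// than client-go's defaults (QPS 5, burst 10).
func newClientset() (*kubernetes.Clientset, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cfg.QPS = 100   // illustrative value, default is 5
	cfg.Burst = 200 // illustrative value, default is 10
	return kubernetes.NewForConfig(cfg)
}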


msergg commented Jun 22, 2023

I also tried using --ignored-user-group= to filter out all the groups the test user doesn't need, but it still gets stuck in rate-limit errors.
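
For reference, that flag is set on the capsule-proxy container arguments; a sketch, assuming the flag can be repeated once per group (the exact value syntax may differ across versions), using group names taken from the log above:

args:
- --ignored-user-group=system:serviceaccounts
- --ignored-user-group=system:cattle:authenticated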


y0psolo commented Sep 22, 2023

Do you have any update on this pull request? It seems it would solve issue #307, and it enables correct group handling on Kubernetes.

Regarding the rate-limiting issue you encountered, I was wondering whether checking that impersonation is valid at the capsule-proxy level is needed at all: as far as I understand, capsule-proxy forwards this information to Kubernetes to retrieve the requested resources, and Kubernetes will check anyway that the impersonation is valid. Am I wrong?
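
For context, the throttled tokenreviews and subjectaccessreviews requests in the logs above are ordinary client-go calls; a minimal sketch of such a TokenReview, assuming the clientset is built elsewhere:

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reviewToken asks the API server to validate a bearer token, mirroring the
// POST /apis/authentication.k8s.io/v1/tokenreviews requests seen in the logs.
func reviewToken(ctx context.Context, cs kubernetes.Interface, token string) (*authenticationv1.UserInfo, error) {
	tr := &authenticationv1.TokenReview{
		Spec: authenticationv1.TokenReviewSpec{Token: token},
	}
	res, err := cs.AuthenticationV1().TokenReviews().Create(ctx, tr, metav1.CreateOptions{})
	if err != nil {
		return nil, err
	}
	if !res.Status.Authenticated {
		return nil, fmt.Errorf("token is not authenticated")
	}
	return &res.Status.User, nil
}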

@prometherion
Member Author

"Maybe some optimization is needed, such as using only the groups defined in the Tenant, or filtering out the unnecessary ones in some way?"

I don't think that alone will solve the issue. It could be worth using the Flow Control feature, besides exposing the client settings.

I would stick to one problem at a time, so we can open a new issue to configure and fine-tune the client parameters.

@y0psolo @msergg if you confirm this fix works as expected, let's get it merged and focus on the rate-limiting thing: feel free to open an issue.
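
For reference, the Flow Control feature mentioned above is Kubernetes API Priority and Fairness; a hedged sketch of a FlowSchema that would give the proxy's review calls a dedicated priority level (the service account name and namespace, and the exact apiVersion, are assumptions that depend on the installation):

apiVersion: flowcontrol.apiserver.k8s.io/v1beta3  # v1beta2 on older clusters
kind: FlowSchema
metadata:
  name: capsule-proxy-reviews
spec:
  priorityLevelConfiguration:
    name: workload-high   # one of the built-in priority levels
  matchingPrecedence: 1000
  distinguisherMethod:
    type: ByUser
  rules:
  - subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: capsule-proxy      # assumed service account name
        namespace: capsule-system
    resourceRules:
    - apiGroups: ["authentication.k8s.io", "authorization.k8s.io"]
      resources: ["tokenreviews", "subjectaccessreviews"]
      verbs: ["create"]
      clusterScope: true

Note, however, that the "Waited for ..." messages in the logs explicitly say "client-side throttling, not priority and fairness", so the client QPS and burst settings remain the complementary knob.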


bbusioc commented Sep 27, 2023

I've tested this for #307 and it does not solve the issue.
What should I do about #307, since nobody is answering on that issue anymore?
Is it because it was initially opened as a support request/question?
Should I open a new one?

@prometherion
Member Author

@bbusioc there's no need to open a new issue; this is open source, and support is offered on a best-effort basis, depending on the maintainers' availability.

I tried to replicate your issue with the proposed changes, and it worked for me with no errors at all, although I used a kubeconfig impersonation setup to make testing easier.

$: kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:9001
  name: kind-capsule
contexts:
- context:
    cluster: kind-capsule
    namespace: capsule-system
    user: kind-capsule
  name: kind-capsule
current-context: kind-capsule
kind: Config
preferences: {}
users:
- name: kind-capsule
  user:
    as: my_user
    as-groups:
    - group1
    - group2
    client-certificate-data: REDACTED
    client-key-data: REDACTED

$: kubectl get ns
NAME        STATUS   AGE
solar-dev   Active   14m
wind-dev    Active   13m
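
The same impersonation can also be expressed per command instead of in the kubeconfig, which should be equivalent to the as/as-groups stanza above:

$: kubectl get ns --as my_user --as-group group1 --as-group group2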

I then swapped the order of the groups:

$: kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:9001
  name: kind-capsule
contexts:
- context:
    cluster: kind-capsule
    namespace: capsule-system
    user: kind-capsule
  name: kind-capsule
current-context: kind-capsule
kind: Config
preferences: {}
users:
- name: kind-capsule
  user:
    as: my_user
    as-groups:
    - group2
    - group1
    client-certificate-data: REDACTED
    client-key-data: REDACTED

$: kubectl get ns
NAME        STATUS   AGE
solar-dev   Active   14m
wind-dev    Active   14m

The Tenant manifests are the following:

apiVersion: v1
items:
- apiVersion: capsule.clastix.io/v1beta2
  kind: Tenant
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"capsule.clastix.io/v1beta2","kind":"Tenant","metadata":{"annotations":{},"name":"solar"},"spec":{"owners":[{"kind":"Group","name":"group2"}]}}
    creationTimestamp: "2023-09-29T13:37:03Z"
    generation: 1
    name: solar
    resourceVersion: "265661"
    uid: 95f1bf13-78b4-4042-8f5c-402a8fe0cb73
  spec:
    owners:
    - clusterRoles:
      - admin
      - capsule-namespace-deleter
      kind: Group
      name: group2
  status:
    namespaces:
    - solar-dev
    size: 1
    state: Active
- apiVersion: capsule.clastix.io/v1beta2
  kind: Tenant
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"capsule.clastix.io/v1beta2","kind":"Tenant","metadata":{"annotations":{},"name":"wind"},"spec":{"owners":[{"kind":"Group","name":"group1"}]}}
    creationTimestamp: "2023-09-29T13:37:50Z"
    generation: 1
    name: wind
    resourceVersion: "265730"
    uid: a92c659b-73a2-4c93-a224-e0143f435e4e
  spec:
    owners:
    - clusterRoles:
      - admin
      - capsule-namespace-deleter
      kind: Group
      name: group1
  status:
    namespaces:
    - wind-dev
    size: 1
    state: Active
kind: List
metadata:
  resourceVersion: ""

Furthermore, the proposed change fetches all the values of the impersonation headers, so I think something is not working on your side; I would keep that discussion in the specific issue.
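
To illustrate the point about fetching multiple headers: Go's http.Header.Get returns only the first value of a header, while Header.Values returns all of them, which matters because Impersonate-Group is sent once per group. A minimal sketch of the pattern (not the literal capsule-proxy code):

import "net/http"

// groupsFromRequest collects every Impersonate-Group value from the request.
// Using r.Header.Get here would silently drop all groups after the first one.
func groupsFromRequest(r *http.Request) []string {
	return r.Header.Values("Impersonate-Group")
}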

@prometherion prometherion merged commit caf1836 into projectcapsule:master Sep 29, 2023
@prometherion prometherion deleted the issues/305 branch September 29, 2023 13:54
@prometherion
Member Author

@msergg regarding the issue with burst and QPS: we already expose the relevant CLI flags thanks to #301.

https://github.com/clastix/capsule-proxy/blob/caf1836b4a6ba7d3d7d6dc80318850aa251114bf/main.go#L87-L88

As I said, if these are not enough, please feel free to open a feature request.
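
For reference, flags like these typically wire straight into the client configuration; a sketch with illustrative flag names (see the linked main.go lines for the actual ones):

import (
	flag "github.com/spf13/pflag"
	"k8s.io/client-go/rest"
)

var (
	// Illustrative names only; the real flags live in main.go at the link above.
	clientQPS   = flag.Float32("client-connection-qps", 20, "maximum QPS for the Kubernetes client")
	clientBurst = flag.Int("client-connection-burst", 30, "maximum burst for the Kubernetes client")
)

// applyClientFlags copies the parsed flag values onto the rest.Config.
func applyClientFlags(cfg *rest.Config) {
	flag.Parse()
	cfg.QPS = *clientQPS
	cfg.Burst = *clientBurst
}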


Successfully merging this pull request may close these issues.

Cannot impersonate user with active_directory group from rancher