Mock Tests failing in kubernetes-tests/ regarding authorization.openshift.io resources #3152

Comments
Looks like I was getting these failures because I had my crc OpenShift cluster running locally. When I stopped the cluster, the tests started passing.
My OpenShift tests fail whenever my crc cluster is running. This is really bad, and we should provide a fix; tests in kubernetes-tests/ should not be affected by a running cluster.
Right now we have a check to retry a request in OpenShiftOAuthInterceptor if it's in the apiGroup authorization.openshift.io / authorization.k8s.io. This is creating problems for regular resources in this apiGroup like Role, ClusterRole, RoleBinding, ClusterRoleBinding, etc. For example, calling create() when oAuthToken is already initialized would always issue the request twice due to this retry logic, which results in HTTP_CONFLICT from the server. Instead of checking the apiGroup, we should retry only when the request URL targets one of these non-RESTful resources: LocalSubjectAccessReview, LocalResourceAccessReview, ResourceAccessReview, SelfSubjectRulesReview, SubjectRulesReview, SubjectAccessReview, SelfSubjectAccessReview.
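A minimal sketch of that URL-based check, assuming the review resources appear as lower-case plural path segments; the class and method names below are hypothetical and not the actual OpenShiftOAuthInterceptor API:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: instead of retrying every request in the
// authorization.openshift.io / authorization.k8s.io apiGroup, retry only when
// the request path ends with one of the non-RESTful "review" resources.
public class AuthorizationRetryCheck {

  // Assumed lower-case plural forms as they would appear in request URLs.
  private static final List<String> NON_RESTFUL_RESOURCES = Arrays.asList(
      "localsubjectaccessreviews",
      "localresourceaccessreviews",
      "resourceaccessreviews",
      "selfsubjectrulesreviews",
      "subjectrulesreviews",
      "subjectaccessreviews",
      "selfsubjectaccessreviews");

  /** Returns true only when the request path ends with one of the review resources. */
  static boolean shouldRetry(String requestPath) {
    String path = requestPath.toLowerCase();
    // Strip a trailing slash so ".../subjectaccessreviews/" also matches.
    if (path.endsWith("/")) {
      path = path.substring(0, path.length() - 1);
    }
    for (String resource : NON_RESTFUL_RESOURCES) {
      if (path.endsWith("/" + resource)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // A RoleBinding create would no longer be retried (avoids the double POST
    // and the resulting HTTP_CONFLICT)...
    System.out.println(shouldRetry(
        "/apis/authorization.openshift.io/v1/namespaces/test/rolebindings")); // false
    // ...while a SubjectAccessReview request would still be retried.
    System.out.println(shouldRetry(
        "/apis/authorization.k8s.io/v1/subjectaccessreviews")); // true
  }
}
```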
Same as #3064.
Since #3158 is merged, why did we reopen this? Or are failures still happening on master?
Not sure if anyone noticed, but I'm seeing consistent failures of these tests on master in the kubernetes-tests/ module. I've even tried deleting my repository and rebuilding from scratch, but I'm still seeing these failures. Surprisingly, I'm not able to see these failures on CI.