OIDC - grpc message - Unauthenticated Authorisation required to get the Namespace 'default' due to unauthorised #6549
Comments
Thanks for following through the debugging steps - that really helps. From what you've said, you have a valid token since you can curl with it, which is odd, because that's exactly what the oauth2-proxy service will send through with the request. Can you verify that the curl also works if you execute it from within your cluster, preferably from the Kubeapps frontend nginx pod? (You may need to set up a separate pod in the same namespace.) Assuming that also works, please triple-check that the token shown in the Chrome network tab when logging in is the same one that you're using via curl. At this point, I'm pretty stumped from the info you've given: the k8s API server is saying the token is invalid because it doesn't have the expected audience, yet you can curl with the same, identical token and it accepts the correct audience from that token? Can you check that:
We've not tried Kubeapps with Cognito before, but assuming it's OIDC compliant, which it seems to be, there should be no issue. As above, the main thing we need to understand is whether your k8s API server is expecting the id token or an access token with the requests. Let me know how you go!
Thanks for the quick response... Firstly, your checklist:
I also took your suggestion (thanks) and performed the curl both outside and within the cluster. Curl WORKS in both cases. The specific curl is shown below:

curl -k -H "Authorization: Bearer $TOKEN" https://myAPISERVERUNIQUEADDRESS.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces

I am returned:

This also works if I curl to the root. I will own up and say that I did this from within an alpine container that I dropped into the kubeapps namespace so that I could get curl installed. I did this as I am unsure how to install curl into your nginx-based pod that is running Kubeapps as non-root... if you know how, I'm happy to try this also.

Again... appreciate the speedy response. I've had a few false starts in getting KubeApps running, trying various combinations of cloud platform supplier and identity provider, and this is the closest I have come to what I think will be a great demonstrator to our internal solutions teams as well as possible clients, so appreciate your persistence here.
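For anyone following along, a minimal sketch of that in-cluster check without building a custom image (the pod name, image and API server address are illustrative; $TOKEN is expanded by the local shell before the command is sent):

# run a throwaway curl pod in the same namespace and hit the API server with the id token
kubectl run curl-debug -n kubeapps --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -k -H "Authorization: Bearer $TOKEN" \
  https://myAPISERVERUNIQUEADDRESS.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces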
From the data you've provided, you've shown that after authenticating, the id token which oauth2-proxy is storing for use with all requests to the k8s API server is valid and works via curl. This shows that, from Kubeapps' point of view, there is nothing wrong. So unfortunately I can only suggest that you:
That said, I don't expect to see any new information there, but it's worth trying. I can't see or imagine why we're seeing a valid token in use by oauth2-proxy, but that same token is failing when used by Kubeapps. The only possibility that I can fathom is that we have a bug in Kubeapps that is causing it to try to send its service account token instead in your particular request, and that service token doesn't have the correct audience etc. required by Cognito. Actually, that's something else you could try: verify that the kubeapps-apis service account (token?) is able to be used to access the k8s API server (i.e. use curl after extracting the token, or even verify with kubectl auth can-i --as system:serviceaccount:...). See the k8s docs for more info. It obviously should be able to do so, but perhaps something in your cluster's auth setup means it can't. But I feel I'm clutching at straws there.
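For example, a rough sketch of that check (the service account name and namespace are assumptions based on a default release named "kubeapps" in the "kubeapps" namespace; adjust to match yours):

# does RBAC allow the kubeapps-apis service account to list namespaces?
kubectl auth can-i list namespaces \
  --as=system:serviceaccount:kubeapps:kubeapps-internal-kubeappsapis

# on k8s 1.24+ you can also mint a short-lived token for that service account and curl with it
TOKEN=$(kubectl create token kubeapps-internal-kubeappsapis -n kubeapps)
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>/api/v1/namespaces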
Related, re-check your k8s API server logs: it could be that you've shown an error for a different request (the kubeapps-apis server does check whether it can use its own service account token to list namespaces in some cases, but handles that failure... so there could be other relevant requests failing - not sure). Also, if you can give a screenshot of the requests tab showing the requests right after you authenticate (highlighting the one you've mentioned with the grpc error message), that would help.
A final thought: you may get more information about exactly what request is hitting your k8s API server by temporarily running your API server with a debug log level or similar. Either that or tracing the network requests - but we really want to know what is actually reaching your k8s API server when the request is sent from Kubeapps.
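On EKS you can't pass flags to the API server directly, but one way to get more of that detail (an assumption on my part, using the standard aws CLI; cluster name and region are placeholders) is to enable the control-plane audit/authenticator logs, which then appear in CloudWatch under /aws/eks/<cluster-name>/cluster:

aws eks update-cluster-config --region eu-central-1 --name <cluster-name> \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator"],"enabled":true}]}'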
Thanks @absoludity for the suggestions. From the top, here's a copy of the chart values:

KubeApps Chart Values

kubeapps:

Logs from Auth-Proxy

I deleted the old pod and waited for the new one to start so I could capture the whole log in its entirety. Taken after I had launched KubeApps, clicked Authenticate with OIDC, and entered my credentials in AWS Cognito. Results (I've redacted certain details and commented the major points I can see with '***'):

*** This is the first call to a resource without authentication, so I guess 403 is valid here
*** Not sure why this resource requires protection but it gives a 403
*** I started the authentication here using the "Login via OIDC Provider" button in KubeApps
172.31.35.188:34316 - e0c5cd91a880e346bddf9f97525bd9db - [email protected] [2023/08/03 08:29:53] mydomain.eu-central-1.elb.amazonaws.com GET / "/" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36" 200 206 0.003
*** Highlighting here that the HTTP status for GetConfiguredPlugins is SUCCESS
*** Highlighting here that the HTTP status for CheckNamespaceExists is SUCCESS (this is important, as looking at the network tracing from Chrome the gRPC call fails with an authentication error)
172.31.35.188:34316 - 3dca78f84b3425e0f39b1b5b6e21f55a - [email protected] [2023/08/03 08:29:56] mydomain.eu-central-1.elb.amazonaws.com GET / "/clr-ui-dark.min.css" HTTP/1.1 "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36" 200 141581 0.013
*** Not sure why this resource requires protection but the previous 403 is now a SUCCESS

Chrome Network Trace for the same period

Overview

I will start with two screenshots showing the overall flow from launch, through login in AWS Cognito, back to KubeApps. I've redacted where I felt necessary and have annotated where I see correlation with the auth-proxy log above.

Details

Some specific details for individual line items. I show here GetConfiguredPlugins, as this seems to show that a gRPC call can succeed on my infrastructure. I show below the call to CheckNamespaceExists; although in the overview it shows as a 200 success, the detail shows otherwise. The following is the payload sent to this command:

I do not see much in the logs for the kubeapps internal api service... Returns:

The timings don't correspond with anything I had done during this period of analysis; maybe I am misunderstanding the architecture, so I have also taken the logs of the frontend KubeApps pod:

localcharts-kubeapps-d4bd84fb-fv2qc Defaulted container "nginx" out of: nginx, auth-proxy nginx 08:27:36.48 INFO ==> ** NGINX setup finished! **

As this post is getting rather large I will look at your other suggestions and respond back with my findings separately. Again, thanks for your support.
@absoludity - "Actually, that's something else you could try: verify that the kubeapps-apis service account (token?) is able to be used to access the k8s API server (ie. use curl after extracting the token, or even verify with kubectl auth can-i --as system:serviceaccount:...). See k8s docs for more info. It obviously should be able to do so, but perhaps something in your clusters' auth setup means it can't. But I feel I'm clutching at straws there."@A@a ok - So I am pushing the limits of my understanding so excuse if what I have done is not as you suggested:- kubectl get serviceaccounts -n ka-auth Gives me- NAME SECRETS AGE So, I created a token based on the localcharts-kubeapps-internal-kubeappsapis Service Account... eyJhbGciOiJSUzI1NiIsImtpZCI6ImY0NDhhMDIwM2Q0MjYwMGIzZjg3MGFkMjA3NDMzZDcyZTA2YjgyYzIifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJuc3ZjIl0sImV4cCI6MTY5MTA3NjkxNCwiaWF0IjoxNjkxMDczMzE0LCJpc3MiOiJodHRwczovL29pZGMuZWtzLmV1LWNlbnRyYWwtMS5hbWF6b25hd3MuY29tL2lkL0IzODQ4NDU3QUQyNTMxQzkyNDg1RTNBRjA2QUY2ODQwIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrYS1hdXRoIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImxvY2FsY2hhcnRzLWt1YmVhcHBzLWludGVybmFsLWt1YmVhcHBzYXBpcyIsInVpZCI6ImRlZjI0YjUwLTNlZjYtNGJmOS04ODJiLWIxYTc3MWFlMGZjNiJ9fSwibmJmIjoxNjkxMDczMzE0LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a2EtYXV0aDpsb2NhbGNoYXJ0cy1rdWJlYXBwcy1pbnRlcm5hbC1rdWJlYXBwc2FwaXMifQ.mRlCDjSgeb1hmW_HmEAVjFAAdY43W-neCizS8oV70B7pdJzJCQ0URuIK7vbsUZmdDobs98DEf6Db_kDHtXtWVHn7hMaeD7aHqQaG0X0E2BcVvhYrofMaBcrDNt6K67fbDDhbSjmF-mVI0xDZqk3TipJCwyvtE9MNPoJniCJfxr6LekwdH19bnVI0LvGM60J6TaU-RhAt83p6AWlIMjHA9Hz9-Q0fJQTRgD0WOMME2i1Xsq8x_0cHbrHNcw06opa-F6NGhvZ5fnsJJU6J69FHzw-X1uSk06caBYOvwHRLJVz5BQjiM1FMihx-_WgqYgooQdAu6V6SIeIBeri-7Ata3w export token = eyJhbGciOiJSUzI1NiIsImtpZCI6ImY0NDhhMDIwM2Q0MjYwMGIzZjg3MGFkMjA3NDMzZDcyZTA2YjgyYzIifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjIl0sImV4cCI6MTY5MTA3NjkxNCwiaWF0IjoxNjkxMDczMzE0LCJpc3MiOiJodHRwcWwtMS5hbWF6b25hd3MuY29tL2lkL0IzODQ4NDU3QUQyNTMxQzkyNDg1RTNBRjA2QUY2ODQwIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrYS1hdXRoIiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImxvY2FsY2hhcnRzLWt1YmVhcHBzLWludGVybmFsLWt1YmVhcHBzYXBpcyIsInVpZCI6ImRlZjI0YjUwLTNlZjYtNGJmOS04ODJiLWIxYTc3MWFlMGZjNiJ9fSwibmJmIjoxNjkxMDczMzE0LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a2EtYXV0aDpsb2NhbGNoYXJ0cy1rdWJlYXBwcy1pbnRlcm5hbC1rdWJlYXBwc2FwaXMifQ.mRlCDjSgeb1hmW_HmEAVjFAAdY43W-neCizS8oV70B7pdJzJCQ0URuIK7vbsUZmdDobs98DEf6Db_kDHtXtWVHn7hMaeD7aHqQaG0X0E2BcVvhYrofMaBcrDNt6K67fbDDhbSjmF-mVI0xDZqk3TipJCwyvtE9MNPoJniCJfxr6LekwdH19bnVI0LvGM60J6TaU-RhAt83p6AWlIMjHA9Hz9-Q0fJQTRgD0WOMME2i1Xsq8x_0cHbrHNcw06opa-F6NGhvZ5fnsJJU6J69FHzw-X1uSk06caBYOvwHRLJVz5BQjiM1FMihx-_WgqYgooQdAu6V6SIeIBeri-7Ata3w curl -k -H "Authorization: Bearer $token" https://myapiserver.gr7.eu-central-1.eks.amazonaws.com/api/v1/namespaces returns me:- { So if I have this correct the internal-kubeappsapis pod has a token that is successfully authenticated against my K8s cluster. TBH - I am was not sure that this was the correct service account given that the logs for the internal-kubeappsapis didn't seem to show any activity related to my requests. So I took a punt at perhaps the default service account might be being used... My relevant pods:- Given that the calls are coming from this end pod (kubeapps-d4bd84fb-fv2qc) and I confirmed that this is using the default service account is it possible that the code is using the service account token NOT the bearer token it is being passed from the client? PS - I have deliberately modified the tokens in this post so they won't be readable. |
Thanks for all the detail. Just some notes:
which matches the following line from the kubeapps-apis:
(though I didn't try to match the others).
As it is, unless you somehow aren't setting the authProxy.enabled flag...
@absoludity, sorry, I missed the copy and paste of the authProxy.enabled flag - it was ENABLED, I just missed it. I understand the rest of what you said and, again, missed the correlated internal-kubeappsapis log entry for CheckNamespaceExists - must have been going snow blind at that point ;-)

"Either way, can you please paste the output for listing the pods in your kubeapps namespace?" - sure, find below:

You'll no doubt pick up that chartmuseum is there and that there are failing sync pods... ChartMuseum is going to be an option for internal hosting of our company's charts. The syncs are there because on previous runs of Kubeapps I had configured the initial repos to include this as well as the bitnami one. This is actually no longer part of the chart (I removed it while I was trying to get auth working); unfortunately, I think because this data is stored in PostgreSQL persistent storage, it is still being picked up even after a chart uninstall / install cycle. I've left it here for transparency and am hoping you agree that it is not related to the current auth issue.
@antgamdia Not sure if you've any other thoughts or things to try here? I'm stumped based on the info we have (see above).
I'm really keen to make this a success; if there is anything else I could provide in terms of logs, or if I can act as a guinea pig to capture more details in some way, then I'm happy to do so. Also, as you might be able to tell, I am no expert, so it's possible I am doing something stupid...
Happy to jump in and try to help (not sure if I will be able to, but I'll try at least :P) Some random thoughts off the top of my head:
Thanks, appreciate your thoughts. If you have any questions about my setup of the Cognito pool or the configuration of the EKS cluster then please let me know. In answer to your questions... No, No and No, but I will try all three and let you know the results.
Hi, I have just tested the setup with Cognito + a local cluster with Kind... and it works :S - I haven't been able to reproduce the issue: cognito.mp4

Let me describe the steps I followed:

Creation of the cluster
kind create cluster --config .\devel\kind-oidc.yaml
kubectl apply -f .\6549-rbac.yaml

Kind configuration (kind-oidc.yaml)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        oidc-issuer-url: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_jQ766r6X0
        oidc-client-id: 7krqldp3cso0621gt8dk6dlmnj
        oidc-username-claim: email
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP

RBAC configuration (6549-rbac.yaml)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: [email protected]

Installing Kubeapps
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl create ns kubeapps
kubectl create ns ka-auth
helm upgrade --install kubeapps bitnami/kubeapps -n ka-auth --values .\6549.yaml

I haven't redacted the Cognito credentials; it is a dummy account created just for this scenario and I will delete it soon.

Kubeapps configuration (6549.yaml)

apprepository:
  initialRepos:
    - name: bitnami-application-catalog
      namespace: kubeapps
      url: https://charts.bitnami.com/bitnami
authProxy:
  enabled: true
  clientID: 7krqldp3cso0621gt8dk6dlmnj
  clientSecret: nbe7p070pn1leq261e09frb4rbk1el89h9v1ffk8316hk7n6uqm
  cookieSecret: bm90LWdvb2Qtc2VjcmV0Cg==
  scope: openid email
  provider: oidc
  extraFlags:
    - --redirect-url=http://localhost:8080/oauth2/callback
    - --oidc-issuer-url=https://cognito-idp.us-east-1.amazonaws.com/us-east-1_jQ766r6X0
    - --proxy-prefix=/oauth2
    - --scope=openid email
    # - --ssl-insecure-skip-verify
    # - --set-authorization-header=true
dashboard:
  customLocale:
    login-desc-oidc: Access to Kubeapps using AWS Cognito.
    login-oidc: Login via AWS Cognito.

Cognito user

email: [email protected]
password: kubeapps

Checking the oidc workflow separately

I have also tested the oidc workflow in Postman. Using the ...

Let me know if using this configuration makes any difference. Do you see any divergence between your config files and mine? Regarding the random thoughts I threw out yesterday... they don't make much sense for your problem, but I faced some OIDC issues with managed AKS clusters (using Active Directory) in the past, so I just wanted to share them just in case.
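For reference, a rough curl equivalent of that Postman check, assuming a Cognito hosted-UI domain and an authorization code captured from the browser redirect (every value below is a placeholder):

curl -X POST "https://<your-domain>.auth.us-east-1.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code" \
  -d "client_id=<client-id>" \
  -d "client_secret=<client-secret>" \
  -d "code=<authorization-code>" \
  -d "redirect_uri=http://localhost:8080/oauth2/callback"
# the JSON response should contain id_token, access_token and refresh_token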
Cool... one by one then:

Looking at the cluster: I am using AWS EKS, so for your Kind-based configuration:

RBAC: Maybe this is my misunderstanding, but the ServiceAccount is defined within the default namespace, which is NOT the namespace I Helmed KubeApps into.

In terms of the kubeapps values file, I can see a couple of differences: you have a redirect-url in your extraFlags that I do not, and in your setup you have some dashboard values. I have none of those; mine are restricted to basic resource constraint config.

I will make a change to my cluster to point it at your Cognito shortly, but before I do, in the interest of making one change at a time, do you see anything that stands out (I am concerned about the kubeapps-operator service account listed in the default namespace)?
Thanks for the quick input.
Find below the mp4 transcoded as an animated gif instead:

I'm not an expert (at all) in AWS Cognito: I just created a user pool with a single hardcoded user + a client. Find below my config, but I guess it is not the issue. So, apart from some minor differences (for instance, the ...). However, the big difference seems to be in the RBAC we have applied. Just for the record, they are:

Mine

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: [email protected]

Yours

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubeapps-operator
  namespace: default

First and foremost: how are you linking the Cognito users with their proper RBAC? I mean, given that you are requesting the claim ... If you want to avoid that burden, you can switch to group claims (see below). But this would require you to create a group (...).

Example with group

# ...
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: my-awesome-user-group
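To make that concrete, a sketch of what a complete group-based binding might look like, assuming the cluster is configured to read groups from the token (e.g. oidc-groups-claim=cognito:groups as an apiServer extraArg on Kind, or the groupsClaim field of an EKS identity-provider config); if a groups prefix is also configured, the subject name below must include that prefix:

kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-cognito-group
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: my-awesome-user-group   # must match an entry in the token's groups claim (with any configured prefix)
EOF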
I don't fully get your point here. I mean, for accessing Kubeapps with a token (like in our getting started docs), yep, you will want a service account as the RBAC subject (to grant perms to this SA)... but if you want to use your Cognito users, you need to let k8s know about the emails/groups that have access to your cluster, no?
Agreed - this feels like it is where the problem is. My final solution will be to map groups within Cognito to specific roles via role binding objects in k8s. Here is an excerpt from the id token which shows the group name for my user1 user: you can see the cognito:groups array with, in this case, my user being a member of the KubeappsReader Cognito group. At some point I was struggling to understand whether I needed some group prefix (there is a reference in the Kubeapps documentation about requiring a prefix to tell k8s this is from the OIDC issuer). Anyway, in order to simplify, I decided to try to get this working simply based on the username in the first instance and then go back and tackle the group membership option.
Notice that this is not the kubeapps-operator cluster role binding. It is one that I created following the pattern in your documentation https://kubeapps.dev/docs/latest/howto/access-control/#applications. I will take a look this afternoon and try to get my RBAC setup to align with yours. I am a little perplexed, however, as to how curl works with the id token while Kubeapps does not; I would have expected them to behave the same. Whatever, I will come back to you. Thanks for the continued support.
Hi @antgamdia, so I have replicated your configuration but this seems to have made no difference to the outcome:
apiVersion: rbac.authorization.k8s.io/v1
I do have service accounts that have been created as part of the kubeapps helm chart:

I have the following role bindings created by the helm chart, plus a default:

I have the following cluster role bindings created by the helm chart:

I have re-tested and the failure still occurs within CheckNamespaceExists; I can still take the JWT token and issue it with curl to get namespaces. Is there anything you can see in the RBAC I have posted above that provides insight? I have not tested against your Cognito, as I would need to give you my public URL (which, offline, I would be happy to do given its demo status).
Another update: I then ran the same but pointing at my Cognito, which also works - SUCCESS. I thought it would be useful to show the differences in the client traces between the successful local Kind-based deployment and the failing EKS deployment:

One thing that stands out is that in the failure scenario I have duplicate Authorization response headers:

The request cookies are significantly different in terms of size - you have two totalling around 5k in size for oauth2_proxy_0.._1. My understanding is that this is due to mine using Redis as the session store, where yours is all in the cookie. So some differences I know we have:
Nginx Ingress: I did have the buffer size increased as your document suggests:

In my production setup I am using Redis; you have the full oauth2-proxy information within the cookie. If you have any comments / observations then please let me know; my next step will be to introduce Redis into the Kind cluster to see if that fails.
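A couple of quick checks that might help compare the two setups (the ingress name and the 8k value are illustrative, and ka-auth is assumed to be the release namespace):

# confirm which proxy-buffer-size the Kubeapps ingress actually carries
kubectl get ingress -n ka-auth -o yaml | grep -i proxy-buffer-size

# if it's missing, the annotation suggested in the Kubeapps OIDC docs can be added, e.g.:
kubectl annotate ingress <kubeapps-ingress-name> -n ka-auth \
  nginx.ingress.kubernetes.io/proxy-buffer-size="8k" --overwrite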
@absoludity @antgamdia I have succeeded in logging in. As per my previous post, I went through a couple of replication steps:

At this point on Kind I have almost my complete system replicated, with the exception of the ingress controller. So I turned my attention back to my EKS cluster:
I can only assume that something in the PostgreSQL db was upsetting the authentication process in some way, and that by, in effect, removing this persistent data and starting from scratch it has fixed whatever was there before. It would be good to understand if there is anything held in PG that pertains to authentication, to give me some comfort (I hate it when things just start working, inexplicably!). Anyway, appreciate your help.
Excellent! Really glad it's working, but yeah, really keen to know what was breaking it earlier. No, the PostgreSQL storage is acting only as a cache of chart metadata (we even destroy it during upgrades to have it recreated sometimes). Feel free to close the issue for now.
It remains a mystery how this corrected 'itself'. Anyway, I will close for now.
I have been pretty busy these days and haven't had the chance to get back to this issue. Thanks for the detailed report on each step you performed!!
100% agree... it's such a bittersweet feeling :S Anyway, happy to hear the issue is gone for now!
Summary
I am attempting to use OIDC with my AWS EKS cluster. I have configured the identity provider to be an AWS Cognito instance. I have made the following Kubeapps helm value changes:
extraFlags:
- --oidc-issuer-url=https://cognito-idp.eu-central-1.amazonaws.com/eu-central-X_YYYYYYYY
- --proxy-prefix=/oauth2
- --scope=openid email
- --ssl-insecure-skip-verify
- --set-authorization-header=true
- --session-store-type=redis
- --redis-connection-url=redis://localcharts-redis-master:6379/0
I have a user created in Cognito (user1@mydomain). I have also configured that user as email verified.
The client ID and client secret have been defined in the Kubeapps helm chart, so as far as I am concerned I am good to go for a logon to KubeApps.
When I browse to my KubeApps site I get the "Welcome to KubeApps" and because I have it configured this way it is showing me the Login via OIDC Provider button. I click the button and am directed to my Cognito instance where I logon as my user ([email protected]).
I am brought back to KubeApps where I am returned to the Welcome to KubeApps screen - i.e. no dashboard.
I have followed the troubleshooting section of your website and can confirm that during Chrome tracing I can see the JWT bearer token, and that the username and the group memberships are being returned. The issuer and the audience are as I would expect within the token. Within the trace, the first point of failure I see is a second call to CheckNamespaceExists (the first call happens on application load before I have authenticated, and in that case I get what I assume is the correct 403 response). However, on the second call, although the HTTP status is 200, the header detail shows the following error:
grpc message - rpc error = Unauthenticated Authorisation required to get the Namespace 'default' due to unauthorised
Cool I think...so I check the K8s API logs...
E0801 12:13:55.600504 11 webhook.go:154] Failed to make webhook authenticator request: unknown
E0801 12:13:55.600568 11 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, oidc: verify token: oidc: expected audience "3cb09302hdv909hv0nh3f2e72a" got [], unknown]"
Given the above, I wondered whether the session state (I am using Redis) was not working, and therefore no session cookie with the tokens was being found server side (even though we can see them for debugging purposes client side). However, I do see a key being created within Redis (albeit I lack the understanding of how to decrypt it manually to check its contents).
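One way to at least confirm the session is landing in Redis (the pod name is an assumption based on the redis://localcharts-redis-master:6379/0 connection URL; add -a with the Redis password if auth is enabled). Note the stored value is encrypted with the cookie secret, so you can only confirm a key exists, not read the tokens inside it:

kubectl exec -n ka-auth localcharts-redis-master-0 -- redis-cli KEYS '*'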
As I mentioned, I have followed the various steps in the https://kubeapps.dev/docs/latest/howto/oidc/oauth2oidc-debugging/ article, and when I grab the token from my Chrome network trace and use it within a curl to the k8s API server I am able to view all resources (to get over this current hurdle I have a cluster-admin role binding for this particular user; using curl, I can add and remove the binding and see 403 / 200 responses as you would expect).
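For reference, the binding toggle described above looks roughly like this (the binding name and API server address are placeholders):

kubectl create clusterrolebinding user1-admin --clusterrole=cluster-admin --user=[email protected]
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>/api/v1/namespaces   # expect 200
kubectl delete clusterrolebinding user1-admin
curl -k -H "Authorization: Bearer $TOKEN" https://<api-server>/api/v1/namespaces   # expect 403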
Your time would be appreciated in helping me understand where I am slipping up here. Obviously any logs or information I can provide with your guidance I'll happily grab.