darwin: oc certificate errors because of golang bug #3447
Comments
I can reproduce a similar issue with odo from https://developers.redhat.com/content-gateway/rest/mirror/pub/openshift-v4/clients/odo/v3.3.0/odo-darwin-arm64 or from brew :-/ odo built from git with go 1.19 fails as well.
I rebuilt odo with the patch; it helps a bit, but it is not enough.
openshift/oc#1207 (comment) has a workaround which fixes both issues (after rebuilding oc and updating crc's go.mod to make use of it).
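For context, a hedged sketch of what such a go.mod update could look like. The module path below is oc's real module path, but the pseudo-version is a placeholder rather than the actual fixed revision, and crc's real go.mod may pin the revision with a plain require instead of a replace:

```
// go.mod sketch: pin the vendored oc code to a revision that contains the
// fix. The pseudo-version below is a placeholder, not the actual commit.
replace github.com/openshift/oc => github.com/openshift/oc v0.0.0-20230101000000-abcdefabcdef
```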
The go bug causing this issue is fixed in go 1.18.10, 1.19.5, and 1.20. Hopefully we'll soon get the fixes in RHEL.
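As a side note (not from the thread): you can check which Go version an `oc` binary was built with by running `go version /path/to/oc`; a report older than 1.18.10, or a 1.19.x older than 1.19.5, still carries the bug.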
I ran across this issue today while attempting to use the binary provided by the OpenShift Console with CRC version
Hopefully this will be fixed 'soon'; fixed golang versions are arriving in RHEL, as soon as there are
* openshift-cli: bump revision to rebuild with newer golang (Fixes crc-org/crc#3447)
No need to compile it yourself if you use the recently built oc binary from Homebrew.
Recent builds of the client from https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.12/ are also fixed.
Original issue description:

Because of go issue golang/go#52010, `oc` on macOS no longer deals correctly with 'certificate untrusted' errors. This causes issues in two places in crc:

1. `crc start`
2. `oc login`

Issue 2 is currently worked around by using an older `oc` version (see #3375 and crc-org/snc#578), but this will soon no longer be an option, and I don't know of other workarounds which would allow users to still be able to log in to the cluster. This is tracked in openshift/oc#1207 and https://bugzilla.redhat.com/show_bug.cgi?id=2097830.

Issue 1 does not impact interactions with the cluster; it's only the addition of the developer and admin users as contexts in the local kubeconfig file which is failing. This happens in code we vendor from `oc`.
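For illustration only (this snippet is not taken from oc or crc, and self-signed.badssl.com is just a convenient public test host): the failure class is Go code that inspects a TLS verification error by type. On macOS, Go toolchains affected by golang/go#52010 return a platform-specific error for untrusted certificates, so a check like the one below never matches and any fallback path (for example, a prompt to retry insecurely) is skipped.

```go
// Minimal sketch, assuming a reachable server that presents a self-signed
// certificate. On an affected darwin Go toolchain, the handshake error does
// not match x509.UnknownAuthorityError, so the errors.As branch is skipped.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

func main() {
	// self-signed.badssl.com intentionally serves a self-signed certificate.
	_, err := tls.Dial("tcp", "self-signed.badssl.com:443", &tls.Config{})
	if err == nil {
		fmt.Println("unexpected: handshake succeeded")
		return
	}

	var unknownAuthority x509.UnknownAuthorityError
	if errors.As(err, &unknownAuthority) {
		// This is the branch callers rely on to offer an "insecure" retry;
		// the darwin bug prevents reaching it.
		fmt.Println("got x509.UnknownAuthorityError:", err)
	} else {
		fmt.Println("error did not match x509.UnknownAuthorityError:", err)
	}
}
```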