OpenShift OperatorHub
Cesar Celis Hernandez edited this page Jan 24, 2023
Install MinIO Server(s) in OpenShift Local Using OperatorHub to reproduce customer issues.
- crc must be ready on your Ubuntu machine; reset and start it from scratch:
crc stop
crc delete
crc setup
crc start
Expected to see:
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
Started the OpenShift cluster.
The server is accessible via web console at:
https://console-openshift-console.apps-crc.testing
Log in as administrator:
Username: kubeadmin
Password: ebWeG-2KVqI-i8dPv-vXcER
Log in as user:
Username: developer
Password: developer
Use the 'oc' command line interface:
$ eval $(crc oc-env)
$ oc login -u developer https://api.crc.testing:6443
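Before moving on, it can help to confirm the cluster is actually reachable with the developer login above; a minimal sanity check:

```shell
# Confirm who we are and which API server we are talking to
oc whoami                  # expect: developer
oc whoami --show-server    # expect: https://api.crc.testing:6443
# The developer user should be able to list at least its own projects
oc get projects
```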
- In OperatorHub, install the latest MinIO Operator version:
- Expose the Operator Console, open its login page at http://localhost:9090/login, and sign in with the token from the operator service-account secret:
oc login -u kubeadmin https://api.crc.testing:6443
oc port-forward svc/console 9090 -n openshift-operators
eyJhbGciOiJSUzI1NiIsImtpZCI6Im8zTFl3dUFkVEs4TFA5U1cySW01TU5aLXNZUU95X2VEbV9PdHZLSEdWQ3cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtb3BlcmF0b3JzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im1pbmlvLW9wZXJhdG9yLXRva2VuLXFka2ZoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pbmlvLW9wZXJhdG9yIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjI0NzNlZWUtNWRkOC00OTJjLWEwNTctMTJiZWMyYjkyZThmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om9wZW5zaGlmdC1vcGVyYXRvcnM6bWluaW8tb3BlcmF0b3IifQ.o75c-P8QfP3DBJKyqbjYFRet5QT4-rp2TzxjRmhrp0ZmszgqxHgk2eg0Vge1g8S1H15G783PAv4st5Mp99ydYF7KP7nYbvJULVxmGNg8SEtFlz_R-6GdPe37htBjfEvcJxt5UkAPYYdcaVcUhNggeMlvpjdg7_1KRVZg3ghGQ5OsEgLCnRPt-D5xnNqS4jjTfXjjJnXB0eIvXDiktfPw0ofmDLgCGUT-Nl1nA_O26TVavs67QwzHfprgKQa8vYfwVm8wtn7XrRsU9IZR5BBKkyGwFH7GnmKToENd0Fip5V2MGIROhdYAgHDE_JfA0dEf9XUXjPCdCvASve6G6j7K3QADwC1eTRgkFfRwVIFHHaZeUqAMbmGDayEPl23SB5O6DXWn2E66lPvxf-7UxoAVlrbIrqTuDikkylFx3FRkEM6eBVoAmlj2PSHqJLixx4hMjkJTriGWjgvZV6sr5RgSI7MeORjXGnS9w4ubXh63oFLAMTBYGkcTOq7gnmW2TjNifyqnXzhQ9YXiqhMPXETNNrvacvxQFXwkOyx2p56Lui8YWep9FVtbBiwO34R7hVtHbTru3TWNNGcDb0RqaHCtTYjZzeMrWUMnfApIj5gtzmkidzoEVy7maQbNBHE9-JwGhflAUCAWZPMSUCGrYUgBMqCYErsHJGdB-ToOwhn1b-o
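One way to pull a token like the one above with `oc` (the secret name `minio-operator-token-qdkfh` is specific to this cluster and will differ on yours, so list the secrets first):

```shell
# Find the operator service-account token secret (name suffix varies per cluster)
oc get secrets -n openshift-operators | grep minio-operator-token

# Decode its token field (substitute the secret name found above)
oc get secret minio-operator-token-qdkfh -n openshift-operators \
  -o jsonpath='{.data.token}' | base64 -d
```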
- Apply these permissions so the operator can create the namespace and read its quotas; this in turn gives it access to display the storage classes in OpenShift:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-role-cesar-5
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - resourcequotas
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete", "deletecollection"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-binding-cesar-5
subjects:
  - kind: ServiceAccount
    name: minio-operator
    namespace: openshift-operators
roleRef:
  kind: ClusterRole
  name: cluster-role-cesar-5
  apiGroup: rbac.authorization.k8s.io
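After applying the manifest, the grants can be verified by impersonating the operator service account with `oc auth can-i`:

```shell
# Both of these should print "yes" once the ClusterRoleBinding is in place
oc auth can-i create namespaces \
  --as=system:serviceaccount:openshift-operators:minio-operator
oc auth can-i get resourcequotas \
  --as=system:serviceaccount:openshift-operators:minio-operator
```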
oc login -u kubeadmin https://api.crc.testing:6443
oc apply -f ~/permissions.yaml
oc adm policy add-scc-to-user privileged -n openshift-operators -z minio-operator
oc adm policy add-scc-to-user privileged -n openshift-operators -z console-sa
oc adm policy add-scc-to-user privileged -n openshift-operators -z default
oc adm policy add-scc-to-user privileged -n openshift-operators -z builder
oc adm policy add-scc-to-user privileged -n openshift-operators -z deployer
---
oc create namespace rafta
oc create serviceaccount minio-operator -n rafta
oc adm policy add-scc-to-user privileged -n rafta -z minio-operator
oc adm policy add-scc-to-user privileged -n rafta -z builder
oc adm policy add-scc-to-user privileged -n rafta -z deployer
oc adm policy add-scc-to-user privileged -n rafta -z default
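To confirm the SCC grants above landed, the `privileged` SCC's users list can be inspected (assuming `add-scc-to-user` recorded the service accounts directly on the SCC, which is how older `oc` releases behave):

```shell
# Each granted SA should appear as system:serviceaccount:<namespace>:<name>
oc get scc privileged -o jsonpath='{.users}'
```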
- Create the tenant with these settings:
Name of the tenant: rafta
Namespace: rafta
Storage Class: should be auto-populated; if not, create one and manually bind a PV to the PVC
Number of Servers: 1
Number of Disks: 1
Total Size: 5 GiB
Erasure Code Parity (EC): 0, but this has to be set after creation; due to a bug it cannot be zero while creating the tenant.
Resources not set
TLS Enabled and auto-cert ON
Audit Log Disabled
Monitoring Disabled
Access Key: STSKAzp1TAsd9TGV
Secret Key: XzMOmH6erHeXzBM8dWAsf5LlOfSRKw7k
- Then the issue appears after the pod is created:
ERROR Unable to initialize backend: parity validation returned an error: parity 4 should be less than or equal to 0 <- (4, 1), for pool(1st)
- To correct it, go to the tenant configuration at http://localhost:9090/namespaces/rafta/tenants/rafta/configuration and change:
EC:4
byEC:0
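The same parity change can likely be made without the console, since MinIO reads standard parity from the `MINIO_STORAGE_CLASS_STANDARD` environment variable; a sketch, assuming the Tenant CR accepts extra env vars under `spec.env`:

```shell
# Pin standard-class parity to EC:4 on the tenant (spec.env path is an assumption)
oc patch tenant rafta -n rafta --type=merge -p \
  '{"spec":{"env":[{"name":"MINIO_STORAGE_CLASS_STANDARD","value":"EC:4"}]}}'
```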
- MinIO starts using TLS at https://minio.rafta.svc.cluster.local:
Formatting 1st pool, 1 set(s), 1 drives per set.
WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
MinIO Object Storage Server
Copyright: 2015-2023 MinIO, Inc.
License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
Version: RELEASE.2023-01-20T02-05-44Z (go1.19.4 linux/amd64)
Status: 1 Online, 0 Offline.
API: https://minio.rafta.svc.cluster.local
Console: https://10.217.0.69:9443 https://127.0.0.1:9443
Documentation: https://min.io/docs/minio/linux/index.html
Warning: The standard parity is set to 0. This can lead to data loss.
- Open the Operator pod logs:
I0123 22:54:43.632561 1 main.go:77] Starting MinIO Operator
I0123 22:54:44.119692 1 main.go:176] caBundle on CRD updated
I0123 22:54:44.123566 1 main-controller.go:243] Setting up event handlers
I0123 22:54:44.123883 1 leaderelection.go:248] attempting to acquire leader lease openshift-operators/minio-operator-lock...
I0123 22:54:44.177530 1 leaderelection.go:258] successfully acquired lease openshift-operators/minio-operator-lock
I0123 22:54:44.177692 1 main-controller.go:500] minio-operator-667547fd56-mzbnh: I am the leader, applying leader labels on myself
I0123 22:54:44.177817 1 main-controller.go:409] Waiting for API to start
I0123 22:54:44.177833 1 main-controller.go:390] Starting console TLS certificate setup
I0123 22:54:44.177839 1 main-controller.go:404] Console TLS is not enabled
I0123 22:54:44.177924 1 main-controller.go:381] Starting HTTP Upgrade Tenant Image server
I0123 22:54:44.352062 1 main-controller.go:352] Using Kubernetes CSR Version: v1
I0123 22:54:44.370164 1 main-controller.go:356] Starting HTTPS API server
I0123 22:54:44.370499 1 main-controller.go:412] Waiting for Upgrade Server to start
I0123 22:54:44.370507 1 main-controller.go:415] Waiting for Console TLS
I0123 22:54:44.370510 1 main-controller.go:419] Starting Tenant controller
I0123 22:54:44.370513 1 main-controller.go:422] Waiting for informer caches to sync
I0123 22:54:44.777327 1 main-controller.go:427] Starting workers
I0123 23:05:10.932989 1 upgrades.go:91] Upgrading v4.2.0
I0123 23:05:10.935863 1 upgrades.go:111] rafta has no log secret
E0123 23:05:10.938303 1 upgrades.go:138] Error deleting operator webhook secret, manual deletion is needed: secrets "operator-webhook-secret" not found
I0123 23:05:10.944956 1 upgrades.go:91] Upgrading v4.2.4
I0123 23:05:10.950734 1 status.go:240] Hit conflict issue, getting latest version of tenant to update version
I0123 23:05:10.968806 1 upgrades.go:91] Upgrading v4.2.8
I0123 23:05:10.982049 1 status.go:240] Hit conflict issue, getting latest version of tenant to update version
I0123 23:05:10.997543 1 upgrades.go:91] Upgrading v4.2.9
I0123 23:05:11.005319 1 status.go:240] Hit conflict issue, getting latest version of tenant to update version
I0123 23:05:11.135108 1 upgrades.go:91] Upgrading v4.3.0
I0123 23:05:11.139326 1 upgrades.go:285] rafta has no log secret
I0123 23:05:11.334042 1 status.go:240] Hit conflict issue, getting latest version of tenant to update version
I0123 23:05:11.740124 1 upgrades.go:91] Upgrading v4.5
I0123 23:05:11.932803 1 status.go:240] Hit conflict issue, getting latest version of tenant to update version
I0123 23:05:12.531959 1 status.go:153] Hit conflict issue, getting latest version of tenant
I0123 23:05:13.140171 1 minio.go:351] Generating private key
I0123 23:05:13.140295 1 minio.go:364] Generating CSR with CN=*.rafta-hl.rafta.svc.cluster.local
I0123 23:05:13.161692 1 csr.go:182] Start polling for certificate of csr/rafta-rafta-csr, every 5s, timeout after 20m0s
I0123 23:05:13.161803 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31279", FieldPath:""}): type: 'Normal' reason: 'CSRCreated' MinIO CSR Created
I0123 23:05:18.163970 1 csr.go:208] Certificate successfully fetched, creating secret with Private key and Certificate
E0123 23:05:18.169631 1 main-controller.go:618] error syncing 'rafta/rafta': waiting for minio cert
I0123 23:05:20.976583 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31323", FieldPath:""}): type: 'Normal' reason: 'SvcCreated' MinIO Service Created
I0123 23:05:20.985229 1 status.go:54] Hit conflict issue, getting latest version of tenant
I0123 23:05:21.029637 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31330", FieldPath:""}): type: 'Normal' reason: 'SvcCreated' Console Service Created
I0123 23:05:21.037688 1 status.go:54] Hit conflict issue, getting latest version of tenant
I0123 23:05:21.057184 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31337", FieldPath:""}): type: 'Normal' reason: 'SvcCreated' Headless Service created
I0123 23:05:21.956553 1 minio.go:222] 'rafta/operator-tls' secret not found, creating one now
E0123 23:05:22.156447 1 main-controller.go:618] error syncing 'rafta/rafta': secrets "operator-tls" not found
I0123 23:06:20.995631 1 main-controller.go:857] Detected we are updating a legacy tenant deployment
I0123 23:06:21.003116 1 main-controller.go:897] 'rafta/rafta': Deploying pool pool-0
I0123 23:06:21.033427 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31596", FieldPath:""}): type: 'Normal' reason: 'PoolCreated' Tenant pool pool-0 created
I0123 23:06:31.615096 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:06:33.016474 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:06:36.372532 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:06:37.417740 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: connect: connection refused
E0123 23:06:39.529784 1 main-controller.go:618] error syncing 'rafta/rafta': Put "https://minio.rafta.svc.cluster.local/minio/admin/v3/add-user?accessKey=STSKAzp1TAsd9TGV": dial tcp 10.217.4.14:443: connect: connection refused
I0123 23:06:39.529821 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31600", FieldPath:""}): type: 'Warning' reason: 'UsersCreatedFailed' Users creation failed: Put "https://minio.rafta.svc.cluster.local/minio/admin/v3/add-user?accessKey=STSKAzp1TAsd9TGV": dial tcp 10.217.4.14:443: connect: connection refused
I0123 23:06:49.217616 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:06:59.795678 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:07:04.661852 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:07:17.243685 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:07:19.668442 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:07:34.674166 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
E0123 23:07:41.026650 1 main-controller.go:618] error syncing 'rafta/rafta': context deadline exceeded
I0123 23:07:41.026794 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31620", FieldPath:""}): type: 'Warning' reason: 'UsersCreatedFailed' Users creation failed: context deadline exceeded
I0123 23:07:49.681481 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:08:09.272803 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:08:24.897775 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
I0123 23:08:39.902853 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": dial tcp 10.217.4.14:443: i/o timeout
E0123 23:09:01.060401 1 main-controller.go:618] error syncing 'rafta/rafta': context deadline exceeded
I0123 23:09:01.060425 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31620", FieldPath:""}): type: 'Warning' reason: 'UsersCreatedFailed' Users creation failed: context deadline exceeded
I0123 23:09:19.511045 1 http_handlers.go:180] MINIO_ARGS value is /export
I0123 23:09:23.947062 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": x509: certificate signed by unknown authority
I0123 23:09:23.953377 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": x509: certificate signed by unknown authority
I0123 23:09:44.793792 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": x509: certificate signed by unknown authority
E0123 23:10:21.096310 1 main-controller.go:618] error syncing 'rafta/rafta': context deadline exceeded
I0123 23:10:21.096652 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"daea1b8d-147f-4d1a-b98a-80d3ac9097bc", APIVersion:"minio.min.io/v2", ResourceVersion:"31620", FieldPath:""}): type: 'Warning' reason: 'UsersCreatedFailed' Users creation failed: context deadline exceeded
- Notice the log:
I0123 23:09:44.793792 1 monitoring.go:129] 'rafta/rafta' Failed to get cluster health: Get "https://minio.rafta.svc.cluster.local/minio/health/cluster": x509: certificate signed by unknown authority
- In particular:
x509: certificate signed by unknown authority
- And if you curl from inside the operator pod:
sh-4.4$ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt https://minio.rafta.svc.cluster.local/minio/health/cluster
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
- In particular:
unable to get local issuer certificate
- So these point to the same issue:
x509: certificate signed by unknown authority <---- Operator logs
unable to get local issuer certificate <----------- when we curl in operator pod
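To confirm which CA actually signed the tenant's certificate (and hence why the operator's trust bundle rejects it), the chain can be inspected from inside the operator pod with openssl:

```shell
# Print the issuer and subject of the certificate MinIO is serving;
# the issuer should show the kube-csr-signer, which the pod's
# service-ca bundle does not trust
openssl s_client -connect minio.rafta.svc.cluster.local:443 -showcerts </dev/null \
  2>/dev/null | openssl x509 -noout -issuer -subject
```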
- Get the private.key and public.crt from the rafta-tls tenant secret:
-----BEGIN PRIVATE KEY-----
MIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgGIDFplH+VRGPurW5
1vm2H6B805xheldVCnrLsmqwZZ6hRANCAATzvpKgpkB3ow7w86cQnrzDvp4vp5Jz
l8+e0H3yChEOQTGWB9T0pVSa1H93hJ4CNvfFFSj8nfxeqxySSO/Gxb+p
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
MIIDKTCCAhGgAwIBAgIRAK0d5CV9knYjCt+Ksl8CV38wDQYJKoZIhvcNAQELBQAw
JjEkMCIGA1UEAwwba3ViZS1jc3Itc2lnbmVyX0AxNjcwNDA5MTc0MB4XDTIzMDEy
NDE1MDgwN1oXDTI0MDEyNDE1MDgwN1owUDEVMBMGA1UEChMMc3lzdGVtOm5vZGVz
MTcwNQYDVQQDDC5zeXN0ZW06bm9kZToqLnJhZnRhLWhsLnJhZnRhLnN2Yy5jbHVz
dGVyLmxvY2FsMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE876SoKZAd6MO8POn
EJ68w76eL6eSc5fPntB98goRDkExlgfU9KVUmtR/d4SeAjb3xRUo/J38Xqsckkjv
xsW/qaOB8jCB7zAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUHAwEw
DAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSXSA0xK2oUw+b/zsImxFb2aairfjCB
mAYDVR0RBIGQMIGNgi9yYWZ0YS1wb29sLTAtMC5yYWZ0YS1obC5yYWZ0YS5zdmMu
Y2x1c3Rlci5sb2NhbIIdbWluaW8ucmFmdGEuc3ZjLmNsdXN0ZXIubG9jYWyCC21p
bmlvLnJhZnRhgg9taW5pby5yYWZ0YS5zdmOCAioughkqLnJhZnRhLnN2Yy5jbHVz
dGVyLmxvY2FsMA0GCSqGSIb3DQEBCwUAA4IBAQAmTY6KPtk+GLOzROEbSrunLQtu
vdlc68o/pML+88le1Q/9ULz+e83glp07pHPW6Q5hUDGY6qVjCecngDBDlQjdNDso
l3QXWK+H2Nx96+hQU7ioN27im4Cd5FbAPWPnawg5J48lQDNjU0F6cAYQz+O7wHKB
SZ2PV5G6ErxU5hFtzJlIb4aofYz6RBLa+HlT9s4wKdRB27ynSfkiBCiJQQo8O/De
yj4ydioYkJR5R/WjEsTNRXr4RDg05jsCmz5GRozkCILRSSG1kmyrXZ14bf075ZqH
IIGHirbGXvie1rH/aRtgk2L0FxQHJoedIOxwKiV3V0/28eWjrZuuc02UA7Uu
-----END CERTIFICATE-----
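A sketch for extracting both files from the secret with `oc` (the key names `private.key` and `public.crt` are taken from the secret contents shown above):

```shell
# Decode the key and cert out of the rafta-tls secret into local files
oc get secret rafta-tls -n rafta -o jsonpath='{.data.private\.key}' | base64 -d > private.key
oc get secret rafta-tls -n rafta -o jsonpath='{.data.public\.crt}'  | base64 -d > public.crt
```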
- Follow these steps:
# Steps obtained from: https://access.redhat.com/solutions/6013471
# How to add a custom CA/CA-chain to "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
# To add the CA/CA-chain to the pod level mounted CA file, which is /var/run/secrets/kubernetes.io/serviceaccount/ca.crt , the [custom ingress certificate configuration steps](https://docs.openshift.com/container-platform/4.7/security/certificates/replacing-default-ingress-certificate.html) can be used.
# First generate the certificate of the signer:
oc get secret csr-signer -n openshift-kube-controller-manager-operator -o template='{{ index .data "tls.crt"}}' | base64 -d > route-ca.crt
# Then, put together the above cert along with its signer in a file called ingress.pem
cat public.crt route-ca.crt > ingress.pem
# Create a secret using the ingress.pem file above and the private.key from step 1
oc create secret tls secretocuatro --cert=ingress.pem --key=private.key -n openshift-ingress
# Patch it, and wait a couple of minutes for the cert to land at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
oc patch ingresscontroller.operator default --type=merge -p '{"spec":{"defaultCertificate": {"name": "secretocuatro"}}}' -n openshift-ingress-operator
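Once the ingress controller has rolled out, the fix can be verified by re-running the earlier health check from inside the operator pod, this time against the pod-mounted CA (assuming curl is available in the operator image, as the earlier `sh-4.4$` session suggests):

```shell
# Should now return HTTP 200 instead of an x509 error
oc rsh -n openshift-operators deploy/minio-operator \
  curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -i https://minio.rafta.svc.cluster.local/minio/health/cluster
```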
- Wait a bit for things to settle after the change; the Operator will then start and be able to communicate:
I0124 15:47:16.532242 1 main.go:77] Starting MinIO Operator
I0124 15:47:21.813485 1 main.go:176] caBundle on CRD updated
I0124 15:47:21.817762 1 main-controller.go:243] Setting up event handlers
I0124 15:47:21.817973 1 leaderelection.go:248] attempting to acquire leader lease openshift-operators/minio-operator-lock...
I0124 15:47:22.049786 1 leaderelection.go:258] successfully acquired lease openshift-operators/minio-operator-lock
I0124 15:47:22.050008 1 main-controller.go:500] minio-operator-667547fd56-brsd2: I am the leader, applying leader labels on myself
I0124 15:47:22.050293 1 main-controller.go:409] Waiting for API to start
I0124 15:47:22.050310 1 main-controller.go:390] Starting console TLS certificate setup
I0124 15:47:22.050326 1 main-controller.go:404] Console TLS is not enabled
I0124 15:47:22.050482 1 main-controller.go:381] Starting HTTP Upgrade Tenant Image server
I0124 15:47:22.306194 1 main-controller.go:352] Using Kubernetes CSR Version: v1
I0124 15:47:22.379983 1 main-controller.go:356] Starting HTTPS API server
I0124 15:47:22.380291 1 main-controller.go:412] Waiting for Upgrade Server to start
I0124 15:47:22.380328 1 main-controller.go:415] Waiting for Console TLS
I0124 15:47:22.380354 1 main-controller.go:419] Starting Tenant controller
I0124 15:47:22.380377 1 main-controller.go:422] Waiting for informer caches to sync
I0124 15:47:27.480578 1 main-controller.go:427] Starting workers
I0124 15:47:30.388063 1 status.go:120] Hit conflict issue, getting latest version of tenant
I0124 15:47:40.059127 1 status.go:120] Hit conflict issue, getting latest version of tenant
I0124 15:47:41.325972 1 status.go:120] Hit conflict issue, getting latest version of tenant
I0124 15:47:41.576943 1 status.go:178] Hit conflict issue, getting latest version of tenant
I0124 15:47:41.611632 1 event.go:285] Event(v1.ObjectReference{Kind:"Tenant", Namespace:"rafta", Name:"rafta", UID:"33aad463-a71a-4be3-b19f-0b28c55a048e", APIVersion:"minio.min.io/v2", ResourceVersion:"38597", FieldPath:""}): type: 'Normal' reason: 'UsersCreated' Users created
I0124 15:47:41.623563 1 status.go:54] Hit conflict issue, getting latest version of tenant