Connection refused when attempting to register user with CA #9

Closed
rgronback opened this issue Feb 28, 2021 · 11 comments · Fixed by #15

Comments

@rgronback

All went well until this point, when I received the response below. Any help would be appreciated.

I also noticed that the CA values.yaml was completely commented out. Was that intentional?

2021/02/28 07:20:30 [INFO] TLS Enabled
2021/02/28 07:20:30 [INFO] generating key: &{A:ecdsa S:256}
2021/02/28 07:20:30 [INFO] encoded CSR
Error: POST failure of request: POST https://192.168.65.3:30180/enroll
{"hosts":["mbp.lan"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBQzCB6gIBADBeMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxDzANBgNV\nBAMTBmVucm9sbDBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IABMzl5juvHE6cNI3J\ncqb51SpVhhj7IDvqARyO4ZbKI4G3bF12+uB/ablYX3W6pD6rgQ0V6HZitUa4pOPF\n9UwdI46gKjAoBgkqhkiG9w0BCQ4xGzAZMBcGA1UdEQQQMA6CDGRlYWR3b29kLmxh\nbjAKBggqhkjOPQQDAgNIADBFAiEA2pGQ462xdmt1h6X5ecLBUYmNVkPuHYDTcBfb\nBanocoUCIDNPLvA4ZqTFdcsTtr7vOhZLZyMCpsm7EgDZbaUwiFui\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","CAName":"ca"}: Post "https://192.168.65.3:30180/enroll": dial tcp 192.168.65.3:30180: connect: connection refused

@Dviejopomata
Contributor

Hello @rgronback,

First of all sorry for the late response.

Could you share the services and the resources used? I think that the fabric-ca server is not exposed through the node port.

@rgronback
Author

Hello!

No apologies necessary, thanks for the reply. To reproduce, I did the following starting with Kubernetes cluster reset on local Docker for Desktop on the Mac:

  • Cloned this repository
  • Created a 'standard' storage class locally using:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: docker.io/hostpath

  • Got to the step where org1-ca is created and running, all good
  • Attempted to register peer on org1-ca per next step, got connection refused as indicated above

Here is some detail, let me know if you need anything else:

rgronback@deadwood hlf-operator % kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw --type=peer \
  --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP
[fabsdk/fab] 2021/03/03 03:13:15 UTC - n/a -> INFO TLS Enabled
[fabsdk/fab] 2021/03/03 03:13:15 UTC - n/a -> INFO generating key: &{A:ecdsa S:256}
[fabsdk/fab] 2021/03/03 03:13:15 UTC - logbridge.(*cLogger).Info -> INFO encoded CSR
Error: enroll failed: enroll failed: POST failure of request: POST https://192.168.65.3:32747/enroll
{"hosts":null,"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIH3MIGfAgEAMBExDzANBgNVBAMTBmVucm9sbDBZMBMGByqGSM49AgEGCCqGSM49\nAwEHA0IABN0Wc7d4xkSXjgt0NDCzVEwThf683OMcjmmVKdAlFYD7DCrIW0haSJpl\nWdKrY/d7/QDZulHxPnM4j2NQzCnvVtigLDAqBgkqhkiG9w0BCQ4xHTAbMBkGA1Ud\nEQQSMBCCDmRlYWR3b29kLmxvY2FsMAoGCCqGSM49BAMCA0cAMEQCIBCnXbtMWNeB\no4ooSmAKVBXlWG+92kghVMxZaVNokgXoAiAyCYoWloGUre74L62jNDVaPyCzo88+\n2M4O8dKX3BpLKA==\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post "https://192.168.65.3:32747/enroll": dial tcp 192.168.65.3:32747: connect: operation timed out

rgronback@deadwood hlf-operator % k logs -f org1-ca-6c487bc47f-m5vg5

fabric-ca-server start
2021/03/03 03:09:05 [INFO] Configuration file location: /var/hyperledger/fabric-ca/fabric-ca-server-config.yaml
2021/03/03 03:09:05 [INFO] Starting server in home directory: /var/hyperledger/fabric-ca
2021/03/03 03:09:05 [INFO] Server Version: 1.4.9
2021/03/03 03:09:05 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2021/03/03 03:09:05 [INFO] Loading CA from /var/hyperledger/fabric-ca/fabric-ca-server-config-tls.yaml
2021/03/03 03:09:05 [INFO] The CA key and certificate files already exist
2021/03/03 03:09:05 [INFO] Key file location: /var/hyperledger/fabric-ca/msp-tls-secret/keyfile
2021/03/03 03:09:05 [INFO] Certificate file location: /var/hyperledger/fabric-ca/msp-tls-secret/certfile
2021/03/03 03:09:05 [INFO] Initialized sqlite3 database at /var/hyperledger/fabric-ca/fabric-ca-server.db
2021/03/03 03:09:05 [INFO] The issuer key was successfully stored. The public key is at: /var/hyperledger/fabric-ca/IssuerPublicKey, secret key is at: /var/hyperledger/fabric-ca/msp/keystore/IssuerSecretKey
2021/03/03 03:09:05 [INFO] Idemix issuer revocation public and secret keys were generated for CA 'tlsca'
2021/03/03 03:09:05 [INFO] The revocation key was successfully stored. The public key is at: /var/hyperledger/fabric-ca/IssuerRevocationPublicKey, private key is at: /var/hyperledger/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2021/03/03 03:09:05 [INFO] The CA key and certificate files already exist
2021/03/03 03:09:05 [INFO] Key file location: /var/hyperledger/fabric-ca/msp-secret/keyfile
2021/03/03 03:09:05 [INFO] Certificate file location: /var/hyperledger/fabric-ca/msp-secret/certfile
2021/03/03 03:09:05 [INFO] Initialized sqlite3 database at /var/hyperledger/fabric-ca/fabric-ca-server.db
2021/03/03 03:09:05 [INFO] The Idemix issuer public and secret key files already exist
2021/03/03 03:09:05 [INFO] secret key file location: /var/hyperledger/fabric-ca/msp/keystore/IssuerSecretKey
2021/03/03 03:09:05 [INFO] public key file location: /var/hyperledger/fabric-ca/IssuerPublicKey
2021/03/03 03:09:05 [INFO] The Idemix issuer revocation public and secret key files already exist
2021/03/03 03:09:05 [INFO] private key file location: /var/hyperledger/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2021/03/03 03:09:05 [INFO] public key file location: /var/hyperledger/fabric-ca/IssuerRevocationPublicKey
2021/03/03 03:09:05 [INFO] Home directory for default CA: /var/hyperledger/fabric-ca
2021/03/03 03:09:05 [INFO] Operation Server Listening on [::]:45615
2021/03/03 03:09:05 [INFO] Listening on https://0.0.0.0:7054
2021/03/03 03:09:06 [INFO] 10.1.0.1:58356 GET /cainfo 200 0 "OK"
2021/03/03 03:09:12 [INFO] 10.1.0.1:58406 GET /cainfo 200 0 "OK"

rgronback@deadwood hlf-operator % k describe service org1-ca
Name: org1-ca
Namespace: default
Labels: app=hlf-ca
app.kubernetes.io/managed-by=Helm
chart=hlf-ca-1.3.0
heritage=Helm
release=org1-ca
Annotations: meta.helm.sh/release-name: org1-ca
meta.helm.sh/release-namespace:
Selector: app=hlf-ca,release=org1-ca
Type: NodePort
IP: 10.97.143.14
LoadBalancer Ingress: localhost
Port: http 7054/TCP
TargetPort: 7054/TCP
NodePort: http 32747/TCP
Endpoints: 10.1.0.10:7054
Session Affinity: None
External Traffic Policy: Cluster
Events:

rgronback@deadwood hlf-operator % k describe pod org1-ca-6c487bc47f-m5vg5
Name: org1-ca-6c487bc47f-m5vg5
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.3
Start Time: Tue, 02 Mar 2021 22:09:03 -0500
Labels: app=hlf-ca
chart=hlf-ca-1.3.0
heritage=Helm
pod-template-hash=6c487bc47f
release=org1-ca
Annotations:
Status: Running
IP: 10.1.0.10
IPs:
IP: 10.1.0.10
Controlled By: ReplicaSet/org1-ca-6c487bc47f
Containers:
ca:
Container ID: docker://ac957d18912877730a368ce2aaa0c99015d9aed0119bd57c07a0851ae643bb8a
Image: hyperledger/fabric-ca:1.4.9
Image ID: docker-pullable://hyperledger/fabric-ca@sha256:28f50c6aa4f4642842e706d3ae6dcee181921d03bd30ab2a8b09b66e0349d92f
Port: 7054/TCP
Host Port: 0/TCP
Command:
sh
-c
mkdir -p $FABRIC_CA_HOME
cp /var/hyperledger/ca_config/ca.yaml $FABRIC_CA_HOME/fabric-ca-server-config.yaml
cp /var/hyperledger/ca_config_tls/fabric-ca-server-config.yaml $FABRIC_CA_HOME/fabric-ca-server-config-tls.yaml

  echo ">\033[0;35m fabric-ca-server start \033[0m"
  fabric-ca-server start
  
State:          Running
  Started:      Tue, 02 Mar 2021 22:09:05 -0500
Ready:          True
Restart Count:  0
Limits:
  cpu:     2
  memory:  4Gi
Requests:
  cpu:      10m
  memory:   256Mi
Liveness:   http-get https://:7054/cainfo delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness:  http-get https://:7054/cainfo delay=0s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
  org1-ca--ca  Secret     Optional: false
  org1-ca--ca  ConfigMap  Optional: false
Environment:   <none>
Mounts:
  /var/hyperledger from data (rw)
  /var/hyperledger/ca_config from ca-config (ro)
  /var/hyperledger/ca_config_tls from ca-config-tls (ro)
  /var/hyperledger/fabric-ca/msp-secret from msp-cryptomaterial (ro)
  /var/hyperledger/fabric-ca/msp-tls-secret from msp-tls-cryptomaterial (ro)
  /var/hyperledger/tls/secret from tls-secret (ro)
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-rkp94 (ro)

Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: org1-ca
ReadOnly: false
tls-secret:
Type: Secret (a volume populated by a Secret)
SecretName: org1-ca--tls-cryptomaterial
Optional: false
ca-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: org1-ca--config
Optional: false
ca-config-tls:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: org1-ca--config-tls
Optional: false
msp-cryptomaterial:
Type: Secret (a volume populated by a Secret)
SecretName: org1-ca--msp-cryptomaterial
Optional: false
msp-tls-cryptomaterial:
Type: Secret (a volume populated by a Secret)
SecretName: org1-ca--msp-tls-cryptomaterial
Optional: false
default-token-rkp94:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-rkp94
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling 11m (x4 over 13m) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Normal Scheduled 11m default-scheduler Successfully assigned default/org1-ca-6c487bc47f-m5vg5 to docker-desktop
Normal Pulled 11m kubelet Container image "hyperledger/fabric-ca:1.4.9" already present on machine
Normal Created 11m kubelet Created container ca
Normal Started 11m kubelet Started container ca

rgronback@deadwood hlf-operator % k describe service istio-ingressgateway -n istio-system
Name: istio-ingressgateway
Namespace: istio-system
Labels: app=istio-ingressgateway
install.operator.istio.io/owning-resource=app-istiocontrolplane
install.operator.istio.io/owning-resource-namespace=istio-system
istio=ingressgateway
istio.io/rev=default
operator.istio.io/component=IngressGateways
operator.istio.io/managed=Reconcile
operator.istio.io/version=1.8.0
release=istio
Annotations:
Selector: app=istio-ingressgateway,istio=ingressgateway
Type: NodePort
IP: 10.111.147.120
LoadBalancer Ingress: localhost
Port: status-port 15021/TCP
TargetPort: 15021/TCP
NodePort: status-port 31002/TCP
Endpoints: 10.1.0.9:15021
Port: http2 80/TCP
TargetPort: 8080/TCP
NodePort: http2 31003/TCP
Endpoints: 10.1.0.9:8080
Port: https 443/TCP
TargetPort: 8443/TCP
NodePort: https 31004/TCP
Endpoints: 10.1.0.9:8443
Port: tls 15443/TCP
TargetPort: 15443/TCP
NodePort: tls 31005/TCP
Endpoints: 10.1.0.9:15443
Session Affinity: None
External Traffic Policy: Cluster
Events:

@Dviejopomata
Contributor

Hi @rgronback, it should work in the latest release; try updating.

If the problem arises again, reopen the issue or contact me directly. Thanks :)

@Dviejopomata linked a pull request on Jun 14, 2021 that will close this issue
@zhangfuli

I have the same problem when attempting to register a user with the CA. I get the following log:

[fabsdk/fab] 2021/09/22 08:16:30 UTC - n/a -> INFO TLS Enabled
[fabsdk/fab] 2021/09/22 08:16:30 UTC - n/a -> INFO generating key: &{A:ecdsa S:256}
[fabsdk/fab] 2021/09/22 08:16:30 UTC - logbridge.(*cLogger).Info -> INFO encoded CSR
Error: enroll failed: enroll failed: POST failure of request: POST https://192.168.65.4:32408/enroll
{"hosts":null,"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBBjCBrQIBADARMQ8wDQYDVQQDEwZlbnJvbGwwWTATBgcqhkjOPQIBBggqhkjO\nPQMBBwNCAAQ/ZC5mCRzYB7GD3YLoS8d9laVL94vUM8TQFfz53cs8pzR950PW5/+S\nOh9Ld2aP9nV6cUjbOJR4dS12PW7j0EyioDowOAYJKoZIhvcNAQkOMSswKTAnBgNV\nHREEIDAeghx6aGFuZ2Z1bGlkZU1hY0Jvb2stUHJvLmxvY2FsMAoGCCqGSM49BAMC\nA0gAMEUCIQCxNxWD0ynmecCP+gnsHL6FfBrfwlibw079/ZlJOr4wDQIgZVYIBD6z\nxNE5yJYXfaO5p5dqhTQQn+kzLtowOWN4R8Q=\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post https://192.168.65.4:32408/enroll: dial tcp 192.168.65.4:32408: connect: operation timed out

@Dviejopomata
Contributor

Hi @zhangfuli

Can you confirm if you have access to this IP: 192.168.65.4?

The kubectl plugin tries to access the URL <KUBERNETES_NODE>:<CA_NODE_PORT>, so if you don't have access to the nodes in the Kubernetes cluster, or the node IP is internal, the plugin won't work for you.
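A quick way to test that reachability from the client machine is a small TCP probe. This is a sketch; `can_connect` is a hypothetical helper, and the host/port in the comment are placeholders for your actual node IP and NodePort:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers connection refused, timeouts, unreachable hosts
        return False

# Substitute your Kubernetes node IP and the CA service's NodePort, e.g.:
# can_connect("192.168.65.3", 30180)
```

If this returns False for `<KUBERNETES_NODE>:<CA_NODE_PORT>`, the problem is network reachability, not the CA itself.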

@zhangfuli

My k8s cluster is also on local Docker for Desktop on the Mac. Here is some detail.

(base) zhangfulideMacBook-Pro:hlf-operator-main zhangfuli$ kubectl hlf ca register --name=org1-ca --user=peer --secret=peerpw --type=peer  --enroll-id enroll --enroll-secret=enrollpw --mspid Org1MSP
 [fabsdk/fab] 2021/09/23 03:53:30 UTC - n/a -> INFO TLS Enabled
 [fabsdk/fab] 2021/09/23 03:53:30 UTC - n/a -> INFO generating key: &{A:ecdsa S:256}
 [fabsdk/fab] 2021/09/23 03:53:30 UTC - logbridge.(*cLogger).Info -> INFO encoded CSR
Error: enroll failed: enroll failed: POST failure of request: POST https://192.168.65.4:31346/enroll
{"hosts":null,"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBBTCBrQIBADARMQ8wDQYDVQQDEwZlbnJvbGwwWTATBgcqhkjOPQIBBggqhkjO\nPQMBBwNCAAR7lfWVMWjOwauo/YhoNdINO6qGBaEMWwxtnoRG8H6ngprcDA8RdGkx\nHfyt1049MYG/XOGfw/1MXe4zk6VWr+vCoDowOAYJKoZIhvcNAQkOMSswKTAnBgNV\nHREEIDAeghx6aGFuZ2Z1bGlkZU1hY0Jvb2stUHJvLmxvY2FsMAoGCCqGSM49BAMC\nA0cAMEQCIDhrVf+BcnJ7wbAsH8iOP13Apx0uHnR/5/PM9ScR1xmiAiAY3NGG4vCw\naig3zpCFLg1IYtooJlR8W9VqsXXFBbHyQw==\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post https://192.168.65.4:31346/enroll: dial tcp 192.168.65.4:31346: connect: operation timed out
(base) zhangfulideMacBook-Pro:hlf-operator-main zhangfuli$ kc describe svc org1-ca
Name:                     org1-ca
Namespace:                default
Labels:                   app=hlf-ca
                          app.kubernetes.io/managed-by=Helm
                          chart=hlf-ca-1.3.0
                          heritage=Helm
                          release=org1-ca
Annotations:              meta.helm.sh/release-name: org1-ca
                          meta.helm.sh/release-namespace: 
Selector:                 app=hlf-ca,release=org1-ca
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.144.125
IPs:                      10.102.144.125
LoadBalancer Ingress:     localhost
Port:                     http  7054/TCP
TargetPort:               7054/TCP
NodePort:                 http  31346/TCP
Endpoints:                10.1.0.38:7054
Port:                     operations  9443/TCP
TargetPort:               9443/TCP
NodePort:                 operations  31950/TCP
Endpoints:                10.1.0.38:9443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

(base) zhangfulideMacBook-Pro:hlf-operator-main zhangfuli$ kc describe pod org1-ca-58b7748447-6lblk
Name:         org1-ca-58b7748447-6lblk
Namespace:    default
Priority:     0
Node:         docker-desktop/192.168.65.4
Start Time:   Thu, 23 Sep 2021 11:49:53 +0800
Labels:       app=hlf-ca
              chart=hlf-ca-1.3.0
              heritage=Helm
              pod-template-hash=58b7748447
              release=org1-ca
Annotations:  <none>
Status:       Running
IP:           10.1.0.38
IPs:
  IP:           10.1.0.38
Controlled By:  ReplicaSet/org1-ca-58b7748447
Containers:
  ca:
    Container ID:  docker://948c1bee08280d85bd5ac3af3ccf513820f2828f44101628aba7b789133ed91c
    Image:         hyperledger/fabric-ca:1.4.9
    Image ID:      docker-pullable://hyperledger/fabric-ca@sha256:28f50c6aa4f4642842e706d3ae6dcee181921d03bd30ab2a8b09b66e0349d92f
    Ports:         7054/TCP, 9443/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      sh
      -c
      mkdir -p $FABRIC_CA_HOME
      cp /var/hyperledger/ca_config/ca.yaml $FABRIC_CA_HOME/fabric-ca-server-config.yaml
      cp /var/hyperledger/ca_config_tls/fabric-ca-server-config.yaml $FABRIC_CA_HOME/fabric-ca-server-config-tls.yaml
      
      echo ">\033[0;35m fabric-ca-server start \033[0m"
      fabric-ca-server start
      
    State:          Running
      Started:      Thu, 23 Sep 2021 11:50:04 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     0
      memory:  0
    Requests:
      cpu:      0
      memory:   0
    Liveness:   http-get https://:7054/cainfo delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get https://:7054/cainfo delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      org1-ca--ca  Secret     Optional: false
      org1-ca--ca  ConfigMap  Optional: false
    Environment:   <none>
    Mounts:
      /var/hyperledger from data (rw)
      /var/hyperledger/ca_config from ca-config (ro)
      /var/hyperledger/ca_config_tls from ca-config-tls (ro)
      /var/hyperledger/fabric-ca/msp-secret from msp-cryptomaterial (ro)
      /var/hyperledger/fabric-ca/msp-tls-secret from msp-tls-cryptomaterial (ro)
      /var/hyperledger/tls/secret from tls-secret (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-97csk (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  org1-ca
    ReadOnly:   false
  tls-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  org1-ca--tls-cryptomaterial
    Optional:    false
  ca-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      org1-ca--config
    Optional:  false
  ca-config-tls:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      org1-ca--config-tls
    Optional:  false
  msp-cryptomaterial:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  org1-ca--msp-cryptomaterial
    Optional:    false
  msp-tls-cryptomaterial:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  org1-ca--msp-tls-cryptomaterial
    Optional:    false
  kube-api-access-97csk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  8m8s   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Normal   Scheduled         8m6s   default-scheduler  Successfully assigned default/org1-ca-58b7748447-6lblk to docker-desktop
  Normal   Pulled            7m56s  kubelet            Container image "hyperledger/fabric-ca:1.4.9" already present on machine
  Normal   Created           7m56s  kubelet            Created container ca
  Normal   Started           7m56s  kubelet            Started container ca
  Warning  Unhealthy         7m55s  kubelet            Readiness probe failed: Get "https://10.1.0.38:7054/cainfo": dial tcp 10.1.0.38:7054: connect: connection refused
(base) zhangfulideMacBook-Pro:hlf-operator-main zhangfuli$ kc logs org1-ca-58b7748447-6lblk
> fabric-ca-server start 
2021/09/23 03:50:05 [INFO] Configuration file location: /var/hyperledger/fabric-ca/fabric-ca-server-config.yaml
2021/09/23 03:50:05 [INFO] Starting server in home directory: /var/hyperledger/fabric-ca
2021/09/23 03:50:05 [INFO] Server Version: 1.4.9
2021/09/23 03:50:05 [INFO] Server Levels: &{Identity:2 Affiliation:1 Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2021/09/23 03:50:05 [INFO] Loading CA from /var/hyperledger/fabric-ca/fabric-ca-server-config-tls.yaml
2021/09/23 03:50:05 [INFO] The CA key and certificate files already exist
2021/09/23 03:50:05 [INFO] Key file location: /var/hyperledger/fabric-ca/msp-tls-secret/keyfile
2021/09/23 03:50:05 [INFO] Certificate file location: /var/hyperledger/fabric-ca/msp-tls-secret/certfile
2021/09/23 03:50:05 [INFO] Initialized sqlite3 database at /var/hyperledger/fabric-ca/fabric-ca-server.db
2021/09/23 03:50:05 [INFO] The issuer key was successfully stored. The public key is at: /var/hyperledger/fabric-ca/IssuerPublicKey, secret key is at: /var/hyperledger/fabric-ca/msp/keystore/IssuerSecretKey
2021/09/23 03:50:05 [INFO] Idemix issuer revocation public and secret keys were generated for CA 'tlsca'
2021/09/23 03:50:05 [INFO] The revocation key was successfully stored. The public key is at: /var/hyperledger/fabric-ca/IssuerRevocationPublicKey, private key is at: /var/hyperledger/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2021/09/23 03:50:05 [INFO] The CA key and certificate files already exist
2021/09/23 03:50:05 [INFO] Key file location: /var/hyperledger/fabric-ca/msp-secret/keyfile
2021/09/23 03:50:05 [INFO] Certificate file location: /var/hyperledger/fabric-ca/msp-secret/certfile
2021/09/23 03:50:05 [INFO] Initialized sqlite3 database at /var/hyperledger/fabric-ca/fabric-ca-server.db
2021/09/23 03:50:05 [INFO] The Idemix issuer public and secret key files already exist
2021/09/23 03:50:05 [INFO]    secret key file location: /var/hyperledger/fabric-ca/msp/keystore/IssuerSecretKey
2021/09/23 03:50:05 [INFO]    public key file location: /var/hyperledger/fabric-ca/IssuerPublicKey
2021/09/23 03:50:05 [INFO] The Idemix issuer revocation public and secret key files already exist
2021/09/23 03:50:05 [INFO]    private key file location: /var/hyperledger/fabric-ca/msp/keystore/IssuerRevocationPrivateKey
2021/09/23 03:50:05 [INFO]    public key file location: /var/hyperledger/fabric-ca/IssuerRevocationPublicKey
2021/09/23 03:50:05 [INFO] Home directory for default CA: /var/hyperledger/fabric-ca
2021/09/23 03:50:05 [INFO] Operation Server Listening on [::]:9443
2021/09/23 03:50:06 [INFO] Listening on https://0.0.0.0:7054
2021/09/23 03:50:13 [INFO] 10.1.0.1:64474 GET /cainfo 200 0 "OK"
2021/09/23 03:50:13 [INFO] 10.1.0.1:64476 GET /cainfo 200 0 "OK"

@Dviejopomata
Contributor

Hi @zhangfuli

Can you try to access https://192.168.65.4:32408/cainfo from your MacBook? This issue is caused by a lack of connectivity between the machine where you're running the commands and your Kubernetes cluster.

Also, in the same terminal where you're running the enroll command, try running the following:

curl https://192.168.65.4:32408/cainfo -k

If it doesn't respond immediately, that confirms the error is due to missing connectivity to the Kubernetes cluster.

@zhangfuli

Thank you very much. I cannot access https://192.168.65.4:32408/cainfo, but I can access localhost:32408/cainfo. I will troubleshoot the problem with my cluster.

@JohanIskandar

Hello, I am using Kind for the cluster, and I get the error below when executing the following command. Did I miss something? Do I have to change a setting in Docker Desktop? Please shed some light on this. Thank you.

kubectl hlf ca register --name=org1-ca --user=peer \
  --secret=peerpw --type=peer \
  --enroll-id enroll --enroll-secret=enrollpw \
  --mspid Org1MSP

[fabsdk/fab] 2022/10/29 16:38:20 UTC - n/a -> INFO TLS Enabled
[fabsdk/fab] 2022/10/29 16:38:20 UTC - n/a -> INFO generating key: &{A:ecdsa S:256}
[fabsdk/fab] 2022/10/29 16:38:20 UTC - logbridge.(*cLogger).Info -> INFO encoded CSR
Error: enroll failed: enroll failed: POST failure of request: POST https://172.20.0.2:30933/enroll
{"hosts":null,"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBAjCBqQIBADARMQ8wDQYDVQQDEwZlbnJvbGwwWTATBgcqhkjOPQIBBggqhkjO\nPQMBBwNCAATsxVa747wAkSHu+Ia0UOlopxnWySmyNXLmz/QzTz5WW+5xP3QW3JlY\ngXgx+I9Kl973vKRW1Jv/mCsUht7+CBMVoDYwNAYJKoZIhvcNAQkOMScwJTAjBgNV\nHREEHDAaghhKb2hhbnMtTWFjQm9vay1Qcm8ubG9jYWwwCgYIKoZIzj0EAwIDSAAw\nRQIhAO909fZEFDR27okDhTMqasg5iRQRdVsuHRBRF3bhHL/DAiA2LTXsiON7pKMh\nGmTRw/vEqbv2Z/VEbNTN13VRD3nBaA==\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post "https://172.20.0.2:30933/enroll": dial tcp 172.20.0.2:30933: connect: operation timed out

@abhithshaji

The error occurs because the port is not open. On a cloud platform this can be solved by going to the firewall section for the cloud VM and creating a new firewall rule that opens the NodePort. There are tutorials online showing how to open ports on each cloud platform.

@Piyushmethi09

Launching network "tradereboot":
✅ - Creating namespace "tradereboot" ...
✅ - Provisioning volume storage ...
✅ - Creating fabric config maps ...
✅ - Initializing TLS certificate Issuers ...
✅ - Launching Fabric CAs ...
⚠️ - Enrolling bootstrap ECert CA users ...
Error: POST failure of request: POST https://org0-ca.tradetec-1137530645.ap-southeast-1.elb.amazonaws.com:443/enroll
{"hosts":["piyush"],"certificate_request":"-----BEGIN CERTIFICATE REQUEST-----\nMIIBPjCB5gIBADBgMQswCQYDVQQGEwJVUzEXMBUGA1UECBMOTm9ydGggQ2Fyb2xp\nbmExFDASBgNVBAoTC0h5cGVybGVkZ2VyMQ8wDQYDVQQLEwZGYWJyaWMxETAPBgNV\nBAMTCHJjYWFkbWluMFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEi8j8fs6vgdq9\nm1IhDTkEDkZ+MOMlTaBLGUGaPOBuRs70CP7T4KX/BRZxdFX7Sf4k3JiqlGJvpwDW\nqpWgC0AAKqAkMCIGCSqGSIb3DQEJDjEVMBMwEQYDVR0RBAowCIIGcGl5dXNoMAoG\nCCqGSM49BAMCA0cAMEQCICRhRR0U3P8H5PV2cs+Yxqm1cB7E5Pw8Rd36NPVtV9DU\nAiBavONt/MD3Lp3QWbefaGhgkjLwMt8PWmHSl/2oDQXtFA==\n-----END CERTIFICATE REQUEST-----\n","profile":"","crl_override":"","label":"","NotBefore":"0001-01-01T00:00:00Z","NotAfter":"0001-01-01T00:00:00Z","ReturnPrecert":false,"CAName":""}: Post "https://org0-ca.tradetec-1137530645.ap-southeast-1.elb.amazonaws.com:443/enroll": dial tcp: lookup org0-ca.tradetec-1137530645.ap-southeast-1.elb.amazonaws.com on 127.0.0.53:53: no such host
