I am trying to do a local setup of Fabric.
To start my CA server and VP0, I created a docker-compose.yml file following the documentation at http://hyperledger-fabric.readthedocs.io/en/latest/Setup/Network-setup/
My docker-compose.yml file is as follows:
```
membersrvc:
  image: hyperledger/fabric-membersrvc
  command: membersrvc
vp0:
  image: hyperledger/fabric-peer
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://172.17.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_ID=vp0
    - CORE_SECURITY_ENROLLID=test_vp0
    - CORE_SECURITY_ENROLLSECRET=MwYpmSRjupbT
  links:
    - membersrvc
  command: sh -c "sleep 5; peer node start"
```
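One thing I noticed in the peer log below is the line `Security enabled status: false`, so it may be that vp0 never contacts the CA at all, even though the enroll ID and secret are set. From my reading of the setup docs, security has to be switched on explicitly for the peer to enroll with membersrvc. The following is my untested sketch (the `membersrvc:7054` addresses are my assumption, based on the CA's default port):

```
vp0:
  image: hyperledger/fabric-peer
  environment:
    # ...same settings as above, plus:
    - CORE_SECURITY_ENABLED=true                  # peer enrolls with the CA at startup
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054     # enrollment CA address (assumed default port)
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054     # transaction CA address
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054   # TLS CA address
```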
This is the terminal output when I run the `docker-compose up` command:
```
Starting 0b17352fc95e_0b17352fc95e_0b17352fc95e_shambhavi_membersrvc_1
Starting shambhavi_vp0_1
Attaching to 0b17352fc95e_0b17352fc95e_0b17352fc95e_shambhavi_membersrvc_1, shambhavi_vp0_1
vp0_1 | 05:53:01.659 [logging] LoggingInit -> DEBU 001 Setting default logging level to DEBUG for command 'node'
vp0_1 | 05:53:01.661 [peer] func1 -> INFO 002 Auto detected peer address: 172.17.0.3:7051
vp0_1 | 05:53:01.664 [peer] func1 -> INFO 003 Auto detected peer address: 172.17.0.3:7051
vp0_1 | 05:53:01.665 [eventhub_producer] AddEventType -> DEBU 004 registering BLOCK
vp0_1 | 05:53:01.665 [eventhub_producer] AddEventType -> DEBU 005 registering CHAINCODE
vp0_1 | 05:53:01.665 [eventhub_producer] AddEventType -> DEBU 006 registering REJECTION
vp0_1 | 05:53:01.665 [eventhub_producer] AddEventType -> DEBU 007 registering REGISTER
vp0_1 | 05:53:01.665 [nodeCmd] serve -> INFO 008 Security enabled status: false
vp0_1 | 05:53:01.665 [nodeCmd] serve -> INFO 009 Privacy enabled status: false
vp0_1 | 05:53:01.667 [eventhub_producer] start -> INFO 00a event processor started
vp0_1 | 05:53:01.667 [db] open -> DEBU 00b Is db path [/var/hyperledger/production/db] empty [false]
vp0_1 | 05:53:01.852 [chaincode] NewChaincodeSupport -> INFO 00c Chaincode support using peerAddress: 172.17.0.3:7051
vp0_1 | 05:53:01.852 [chaincode] NewChaincodeSupport -> DEBU 00d Turn off keepalive(value 0)
vp0_1 | 05:53:01.852 [sysccapi] RegisterSysCC -> INFO 00e system chaincode (noop,github.com/hyperledger/fabric/bddtests/syschaincode/noop) disabled
vp0_1 | 05:53:01.853 [nodeCmd] serve -> DEBU 00f Running as validating peer - making genesis block if needed
vp0_1 | 05:53:01.855 [state] loadConfig -> INFO 010 Loading configurations...
vp0_1 | 05:53:01.856 [state] loadConfig -> INFO 011 Configurations loaded. stateImplName=[buckettree], stateImplConfigs=map[numBuckets:%!s(int=1000003) maxGroupingAtEachLevel:%!s(int=5) bucketCacheSize:%!s(int=100)], deltaHistorySize=[500]
vp0_1 | 05:53:01.856 [state] NewState -> INFO 012 Initializing state implementation [buckettree]
vp0_1 | 05:53:01.856 [buckettree] initConfig -> INFO 013 configs passed during initialization = map[string]interface {}{"numBuckets":1000003, "maxGroupingAtEachLevel":5, "bucketCacheSize":100}
vp0_1 | 05:53:01.857 [buckettree] initConfig -> INFO 014 Initializing bucket tree state implemetation with configurations &{maxGroupingAtEachLevel:5 lowestLevel:9 levelToNumBucketsMap:map[3:65 8:200001 7:40001 5:1601 0:1 9:1000003 4:321 2:13 1:3 6:8001] hashFunc:0xab4560}
vp0_1 | 05:53:01.857 [buckettree] newBucketCache -> INFO 015 Constructing bucket-cache with max bucket cache size = [100] MBs
vp0_1 | 05:53:01.859 [buckettree] loadAllBucketNodesFromDB -> INFO 016 Loaded buckets data in cache. Total buckets in DB = [0]. Total cache size:=0
vp0_1 | 05:53:01.859 [nodeCmd] serve -> DEBU 017 Running as validating peer - installing consensus
vp0_1 | 05:53:01.860 [peer] initDiscovery -> DEBU 018 Retrieved discovery list from disk: [172.17.0.4:7051]
vp0_1 | 05:53:01.861 [consensus/controller] NewConsenter -> INFO 019 Creating default consensus plugin (noops)
vp0_1 | 05:53:01.861 [consensus/noops] newNoops -> DEBU 01a Creating a NOOPS object
vp0_1 | 05:53:01.862 [consensus/noops] newNoops -> INFO 01b NOOPS consensus type = *noops.Noops
vp0_1 | 05:53:01.863 [consensus/noops] newNoops -> INFO 01c NOOPS block size = 500
vp0_1 | 05:53:01.863 [consensus/noops] newNoops -> INFO 01d NOOPS block wait = 1s
vp0_1 | 05:53:01.863 [nodeCmd] serve -> INFO 01e Starting peer with ID=name:"vp0" , network ID=dev, address=172.17.0.3:7051, rootnodes=, validator=true
vp0_1 | 05:53:01.872 [consensus/statetransfer] verifyAndRecoverBlockchain -> DEBU 01f Validating existing blockchain, highest validated block is 0, valid through 0
vp0_1 | 05:53:01.872 [consensus/statetransfer] blockThread -> INFO 020 Validated blockchain to the genesis block
vp0_1 | 05:53:01.872 [consensus/handler] 1 -> DEBU 021 Starting up message thread for consenter
vp0_1 | 05:53:01.874 [peer] ensureConnected -> DEBU 022 Starting Peer reconnect service (touch service), with period = 6s
vp0_1 | 05:53:01.874 [peer] chatWithPeer -> DEBU 023 Initiating Chat with peer address: 172.17.0.4:7051
vp0_1 | 05:53:01.875 [rest] StartOpenchainRESTServer -> INFO 024 Initializing the REST service on 0.0.0.0:7050, TLS is disabled.
vp0_1 | 05:53:04.874 [peer] chatWithPeer -> ERRO 025 Error creating connection to peer address 172.17.0.4:7051: grpc: timed out when dialing
vp0_1 | 05:53:07.874 [peer] ensureConnected -> WARN 026 Touch service indicates dropped connections, attempting to reconnect...
vp0_1 | 05:53:07.874 [peer] ensureConnected -> DEBU 027 Connected to: []
vp0_1 | 05:53:07.874 [peer] ensureConnected -> DEBU 028 Discovery knows about: [172.17.0.4:7051]
vp0_1 | 05:53:07.875 [peer] chatWithPeer -> DEBU 029 Initiating Chat with peer address: 172.17.0.4:7051
vp0_1 | 05:53:10.875 [peer] chatWithPeer -> ERRO 02a Error creating connection to peer address 172.17.0.4:7051: grpc: timed out when dialing
vp0_1 | 05:53:13.874 [peer] ensureConnected -> WARN 02b Touch service indicates dropped connections, attempting to reconnect...
vp0_1 | 05:53:13.874 [peer] ensureConnected -> DEBU 02c Connected to: []
vp0_1 | 05:53:13.874 [peer] ensureConnected -> DEBU 02d Discovery knows about: [172.17.0.4:7051]
vp0_1 | 05:53:13.874 [peer] chatWithPeer -> DEBU 02e Initiating Chat with peer address: 172.17.0.4:7051
vp0_1 | 05:53:14.875 [peer] chatWithPeer -> DEBU 02f Established Chat with peer address: 172.17.0.4:7051
vp0_1 | 05:53:14.875 [peer] handleChat -> DEBU 030 Current context deadline = 0001-01-01 00:00:00 +0000 UTC, ok = false
vp0_1 | 05:53:14.876 [peer] SendMessage -> DEBU 031 Sending message to stream of type: DISC_HELLO
vp0_1 | 05:53:14.889 [consensus/handler] HandleMessage -> DEBU 032 Did not handle message of type DISC_HELLO, passing on to next MessageHandler
vp0_1 | 05:53:14.889 [peer] HandleMessage -> DEBU 033 Handling Message of type: DISC_HELLO
vp0_1 | 05:53:14.889 [peer] beforeHello -> DEBU 034 Received DISC_HELLO, parsing out Peer identification
vp0_1 | 05:53:14.889 [peer] beforeHello -> DEBU 035 Received DISC_HELLO from endpoint=peerEndpoint:<ID:<name:"vp1" > address:"172.17.0.4:7051" type:VALIDATOR > blockchainInfo:<height:1 currentBlockHash:"F\271\335+\013\250\215\023#;?\353t>\353$?\315R\352b\270\033\202\265\014'dn\325v/\327]\304\335\330\300\362\000\313\005\001\235g\265\222\366\374\202\034IG\232\264\206@).\254\263\267\304\276" >
vp0_1 | 05:53:49.888 [peer] beforeGetPeers -> DEBU 08b Sending back DISC_PEERS
vp0_1 | 05:53:49.888 [peer] SendMessage -> DEBU 08c Sending message to stream of type: DISC_PEERS
vp0_1 | 05:53:49.889 [peer] SendMessage -> DEBU 08d Sending message to stream of type: DISC_GET_PEERS
vp0_1 | 05:53:49.889 [consensus/handler] HandleMessage -> DEBU 08e Did not handle message of type DISC_PEERS, passing on to next MessageHandler
vp0_1 | 05:53:49.889 [peer] HandleMessage -> DEBU 08f Handling Message of type: DISC_PEERS
vp0_1 | 05:53:49.889 [peer] beforePeers -> DEBU 090 Received DISC_PEERS, grabbing peers message
vp0_1 | 05:53:49.890 [peer] beforePeers -> DEBU 091 Received PeersMessage with Peers: peers:<ID:<name:"vp0" > address:"172.17.0.3:7051" type:VALIDATOR >
vp0_1 | 05:53:54.888 [consensus/handler] HandleMessage -> DEBU 092 Did not handle message of type DISC_GET_PEERS, passing on to next MessageHandler
vp0_1 | 05:53:54.888 [peer] HandleMessage -> DEBU 093 Handling Message of type: DISC_GET_PEERS
vp0_1 | 05:53:54.888 [peer] beforeGetPeers -> DEBU 094 Sending back DISC_PEERS
vp0_1 | 05:53:54.888 [peer] SendMessage -> DEBU 095 Sending message to stream of type: DISC_PEERS
vp0_1 | 05:53:54.890 [peer] SendMessage -> DEBU 096 Sending message to stream of type: DISC_GET_PEERS
vp0_1 | 05:53:54.890 [consensus/handler] HandleMessage -> DEBU 097 Did not handle message of type DISC_PEERS, passing on to next MessageHandler
vp0_1 | 05:53:54.891 [peer] HandleMessage -> DEBU 098 Handling Message of type: DISC_PEERS
vp0_1 | 05:53:54.891 [peer] beforePeers -> DEBU 099 Received DISC_PEERS, grabbing peers message
vp0_1 | 05:53:54.891 [peer] beforePeers -> DEBU 09a Received PeersMessage with Peers: peers:<ID:<name:"vp0" > address:"172.17.0.3:7051" type:VALIDATOR >
vp0_1 | 05:53:55.874 [peer] ensureConnected -> DEBU 09b Touch service indicates no dropped connections
vp0_1 | 05:53:55.874 [peer] ensureConnected -> DEBU 09c Connected to: [172.17.0.4:7051]
vp0_1 | 05:53:55.874 [peer] ensureConnected -> DEBU 09d Discovery knows about: [172.17.0.4:7051]
vp0_1 | 05:53:59.888 [consensus/handler] HandleMessage -> DEBU 09e Did not handle message of type DISC_GET_PEERS, passing on to next MessageHandler
vp0_1 | 05:53:59.889 [peer] HandleMessage -> DEBU 09f Handling Message of type: DISC_GET_PEERS
vp0_1 | 05:53:59.889 [peer] beforeGetPeers -> DEBU 0a0 Sending back DISC_PEERS
vp0_1 | 05:53:59.889 [peer] SendMessage -> DEBU 0a1 Sending message to stream of type: DISC_PEERS
vp0_1 | 05:53:59.890 [peer] SendMessage -> DEBU 0a2 Sending message to stream of type: DISC_GET_PEERS
vp0_1 | 05:53:59.890 [consensus/handler] HandleMessage -> DEBU 0a3 Did not handle message of type DISC_PEERS, passing on to next MessageHandler
vp0_1 | 05:53:59.890 [peer] HandleMessage -> DEBU 0a4 Handling Message of type: DISC_PEERS
vp0_1 | 05:53:59.890 [peer] beforePeers -> DEBU 0a5 Received DISC_PEERS, grabbing peers message
vp0_1 | 05:53:59.890 [peer] beforePeers -> DEBU 0a6 Received PeersMessage with Peers: peers:<ID:<name:"vp0" > address:"172.17.0.3:7051" type:VALIDATOR >
vp0_1 | 05:54:01.874 [peer] ensureConnected -> DEBU 0a7 Touch service indicates no dropped connections
vp0_1 | 05:54:01.874 [peer] ensureConnected -> DEBU 0a8 Connected to: [172.17.0.4:7051]
vp0_1 | 05:54:01.874 [peer] ensureConnected -> DEBU 0a9 Discovery knows about: [172.17.0.4:7051]
vp0_1 | 05:54:04.887 [consensus/handler] HandleMessage -> DEBU 0aa Did not handle message of type DISC_GET_PEERS, passing on to next MessageHandler
vp0_1 | 05:54:04.888 [peer] HandleMessage -> DEBU 0ab Handling Message of type: DISC_GET_PEERS
vp0_1 | 05:54:04.888 [peer] beforeGetPeers -> DEBU 0ac Sending back DISC_PEERS
vp0_1 | 05:54:04.888 [peer] SendMessage -> DEBU 0ad Sending message to stream of type: DISC_PEERS
vp0_1 | 05:54:04.889 [peer] SendMessage -> DEBU 0ae Sending message to stream of type: DISC_GET_PEERS
vp0_1 | 05:54:04.890 [consensus/handler] HandleMessage -> DEBU 0af Did not handle message of type DISC_PEERS, passing on to next MessageHandler
vp0_1 | 05:54:04.890 [peer] HandleMessage -> DEBU 0b0 Handling Message of type: DISC_PEERS
vp0_1 | 05:54:04.890 [peer] beforePeers -> DEBU 0b1 Received DISC_PEERS, grabbing peers message
```
In another terminal, I run the following command to start my VP1, passing my VP0's IP address as `CORE_PEER_DISCOVERY_ROOTNODE`:

```
docker run --rm -it \
  -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 \
  -e CORE_PEER_ID=vp1 \
  -e CORE_PEER_ADDRESSAUTODETECT=true \
  -e CORE_PEER_DISCOVERY_ROOTNODE=172.17.0.3:7051 \
  hyperledger/fabric-peer peer node start
```
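If I do enable security as sketched above, I assume VP1 would also need its own enrollment credentials on top of these flags, along these lines (the `test_vp1` secret is the default one listed in the membersrvc configuration, if I am reading it right):

```
docker run --rm -it \
  -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 \
  -e CORE_PEER_ID=vp1 \
  -e CORE_PEER_ADDRESSAUTODETECT=true \
  -e CORE_PEER_DISCOVERY_ROOTNODE=172.17.0.3:7051 \
  -e CORE_SECURITY_ENABLED=true \
  -e CORE_SECURITY_ENROLLID=test_vp1 \
  -e CORE_SECURITY_ENROLLSECRET=5wgHK9qqYaPy \
  hyperledger/fabric-peer peer node start
```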
So my question is: VP0 and VP1 are communicating, but how do I verify that the CA server has also started, given that no CA-specific log output appears?
Please help!
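In the meantime, these are the only checks I could come up with (a sketch; 7054 is the membersrvc default gRPC port as far as I know, and the container name is copied from the compose output above):

```
# Is the CA container actually running?
docker-compose ps

# Show whatever membersrvc has logged so far
docker logs 0b17352fc95e_0b17352fc95e_0b17352fc95e_shambhavi_membersrvc_1

# Probe the CA's default gRPC port (7054) from the host
CA_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' 0b17352fc95e_0b17352fc95e_0b17352fc95e_shambhavi_membersrvc_1)
nc -zv "$CA_IP" 7054
```

Is there a better way to confirm that the CA is up?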