Sample Network

Create a sample network with CRDs, fabric-operator, and the Kube API server:

  • Apply kustomization overlays to install the Operator
  • Apply kustomization overlays to construct a Fabric Network
  • Call peer CLI and channel participation SDKs to administer the network
  • Deploy Chaincode-as-a-Service smart contracts
  • Develop Gateway Client applications on a local workstation

Feedback, comments, questions, etc. on Discord: #fabric-kubernetes


Prerequisites:

General

  • kubectl
  • jq
  • envsubst (brew install gettext on OSX)
  • k9s (recommended)
  • Fabric binaries (peer, osnadmin, etc.) will be installed into the local bin folder. Add these to your PATH:
export PATH=$PWD:$PWD/bin:$PATH

Kubernetes

If you do not have access to a Kubernetes cluster, create a local instance with KIND and Docker (allocate at least 8 CPUs / 8 GB RAM to Docker):

network kind
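
To confirm the cluster is reachable, check the kind context (kind prefixes contexts with kind-; a default cluster name of kind is assumed here):

kubectl cluster-info --context kind-kind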

For additional cluster options, see the detailed guidelines in Appendix: Alternate k8s Runtimes below.

DNS Domain

The operator utilizes Kubernetes Ingress resources to expose Fabric services at a common DNS wildcard domain (e.g. *.test-network.example.com). For convenience, the sample network includes an Nginx ingress controller, pre-configured with ssl-passthrough for TLS termination at the node endpoints.

For local clusters, set the ingress wildcard domain to the host loopback interface (127.0.0.1):

export TEST_NETWORK_INGRESS_DOMAIN=localho.st

For cloud-based clusters, set the ingress wildcard domain to the public DNS A record:

export TEST_NETWORK_INGRESS_DOMAIN=test-network.example.com
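
In either case, a quick sanity check (assuming dig is installed) is that an arbitrary host under the wildcard domain resolves to the expected ingress address:

dig +short test.${TEST_NETWORK_INGRESS_DOMAIN}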

For additional guidelines on configuring ingress and DNS, see Considerations for Kubernetes Distributions.

Sample Network

Install the Nginx controller and Fabric CRDs:

network cluster init
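
To confirm the controller and CRDs landed before continuing (the ingress-nginx namespace is the same one referenced in the appendices below; the ibp.com CRD group is an assumption based on the operator's IBP lineage):

kubectl -n ingress-nginx get pods
kubectl get crds | grep ibp.com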

Launch the operator and kustomize a network of CAs, peers, and orderers:

network up

Explore Kubernetes Pods, Deployments, Services, Ingress, etc.:

kubectl -n test-network get all
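
To watch the pods come up as the operator reconciles the network:

kubectl -n test-network get pods --watch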

Chaincode

In the examples below, the peer binary will be used to invoke smart contracts on the org1-peer1 ledger. Set the CLI context with:

export FABRIC_CFG_PATH=${PWD}/temp/config
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=test-network-org1-peer1-peer.${TEST_NETWORK_INGRESS_DOMAIN}:443
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_MSPCONFIGPATH=${PWD}/temp/enrollments/org1/users/org1admin/msp
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/temp/channel-msp/peerOrganizations/org1/msp/tlscacerts/tlsca-signcert.pem
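
As a quick connectivity check for this context (optional; the list will be empty until network channel create is run below), list the channels the peer has joined:

peer channel list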

Chaincode as a Service

The operator is compatible with sample Chaincode-as-a-Service smart contracts and the ccaas external builder. When using the ccaas builder, the chaincode pods must be started and running in the cluster before the contract can be approved on the channel.

Clone the fabric-samples git repository:

git clone https://github.com/hyperledger/fabric-samples.git /tmp/fabric-samples

Create a channel:

network channel create

Deploy a sample contract:

network cc deploy   asset-transfer-basic basic_1.0 /tmp/fabric-samples/asset-transfer-basic/chaincode-java

network cc metadata asset-transfer-basic
network cc invoke   asset-transfer-basic '{"Args":["InitLedger"]}'
network cc query    asset-transfer-basic '{"Args":["ReadAsset","asset1"]}' | jq
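
Any transaction exposed by the contract can be driven the same way; for example, CreateAsset and ReadAsset are transactions defined by the fabric-samples basic contract:

network cc invoke   asset-transfer-basic '{"Args":["CreateAsset","asset7","blue","5","Tom","300"]}'
network cc query    asset-transfer-basic '{"Args":["ReadAsset","asset7"]}' | jq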

Or use the native peer CLI to query the contract installed on org1 / peer1:

peer chaincode query -n asset-transfer-basic -C mychannel -c '{"Args":["org.hyperledger.fabric:GetMetadata"]}'

K8s Chaincode Builder

The operator can also be configured for use with fabric-builder-k8s, providing smooth and immediate Chaincode Right Now! deployments. With the k8s builder, the peer node will directly manage the lifecycle of the chaincode pods.

Reconstruct the network with the "k8s-fabric-peer" image:

network down

export TEST_NETWORK_PEER_IMAGE=ghcr.io/hyperledgendary/k8s-fabric-peer
export TEST_NETWORK_PEER_IMAGE_LABEL=v0.6.0

network up
network channel create

Download a "k8s" chaincode package:

curl -fsSL https://github.com/hyperledgendary/conga-nft-contract/releases/download/v0.1.1/conga-nft-contract-v0.1.1.tgz -o conga-nft-contract-v0.1.1.tgz

Install the smart contract:

peer lifecycle chaincode install conga-nft-contract-v0.1.1.tgz

export PACKAGE_ID=$(peer lifecycle chaincode calculatepackageid conga-nft-contract-v0.1.1.tgz) && echo $PACKAGE_ID
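
Optionally confirm the package is present on the peer before approving:

peer lifecycle chaincode queryinstalled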

peer lifecycle \
  chaincode approveformyorg \
  --channelID     mychannel \
  --name          conga-nft-contract \
  --version       1 \
  --package-id    ${PACKAGE_ID} \
  --sequence      1 \
  --orderer       test-network-org0-orderersnode1-orderer.${TEST_NETWORK_INGRESS_DOMAIN}:443 \
  --tls --cafile  $PWD/temp/channel-msp/ordererOrganizations/org0/orderers/org0-orderersnode1/tls/signcerts/tls-cert.pem \
  --connTimeout   15s

peer lifecycle \
  chaincode commit \
  --channelID     mychannel \
  --name          conga-nft-contract \
  --version       1 \
  --sequence      1 \
  --orderer       test-network-org0-orderersnode1-orderer.${TEST_NETWORK_INGRESS_DOMAIN}:443 \
  --tls --cafile  $PWD/temp/channel-msp/ordererOrganizations/org0/orderers/org0-orderersnode1/tls/signcerts/tls-cert.pem \
  --connTimeout   15s
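
Optionally verify the commit on the channel:

peer lifecycle chaincode querycommitted --channelID mychannel --name conga-nft-contract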

Inspect chaincode pods:

kubectl -n test-network describe pods -l app.kubernetes.io/created-by=fabric-builder-k8s

Query the smart contract:

peer chaincode query -n conga-nft-contract -C mychannel -c '{"Args":["org.hyperledger.fabric:GetMetadata"]}'

Teardown

Invariably, something in the recipe above will go awry. Look for additional diagnostics in network-debug.log and reset the stage with:

network down

or

network unkind

Appendix: Operations Console

Launch the Fabric Operations Console:

network console
  • open https://test-network-hlf-console-console.${TEST_NETWORK_INGRESS_DOMAIN}
  • Accept the self-signed TLS certificate
  • Log in as admin:password
  • Build a network

Appendix: Alternate k8s Runtimes

Rancher Desktop

An excellent alternative for local development is the k3s distribution bundled with Rancher Desktop.

  1. Increase cluster resources to 8 CPUs / 8 GB RAM
  2. Select the mobyd or containerd runtime
  3. Disable the Traefik ingress
  4. Restart Kubernetes

For use with the mobyd / Docker container runtime:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="local-path"

For use with containerd:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_CONTAINER_CLI="nerdctl"
export TEST_NETWORK_CONTAINER_NAMESPACE="--namespace k8s.io"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="local-path"
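
With either set of variables exported, bring the network up with the same flow as above:

network cluster init
network up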

IKS

For installations at IBM Cloud, use the following configuration settings:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_COREDNS_DOMAIN_OVERRIDE="false"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="ibm-file-gold"

To determine the external IP address for the Nginx ingress controller:

  1. Run network cluster init to create the Nginx resources
  2. Determine the IP address for the Nginx EXTERNAL-IP:
INGRESS_IPADDR=$(kubectl -n ingress-nginx get svc/ingress-nginx-controller -o json | jq -r .status.loadBalancer.ingress[0].ip)
  3. Set a virtual host domain that resolves *.<EXTERNAL-IP>.nip.io to the ingress IP, using the nip.io public wildcard DNS resolver:
export TEST_NETWORK_INGRESS_DOMAIN=$(echo $INGRESS_IPADDR | tr -s '.' '-').nip.io

For additional guidelines on configuring ingress and DNS, see Considerations for Kubernetes Distributions.

EKS

For installations at Amazon's Elastic Kubernetes Service, use the following settings:

export TEST_NETWORK_CLUSTER_RUNTIME="k3s"
export TEST_NETWORK_COREDNS_DOMAIN_OVERRIDE="false"
export TEST_NETWORK_STAGE_DOCKER_IMAGES="false"
export TEST_NETWORK_STORAGE_CLASS="gp2"

As an alternative to registering a public DNS domain with Route 53, the nip.io "Dead simple wildcard DNS for any IP Address" service may be used to associate the Nginx external IP with an nip.io domain.

To determine the external IP address for the ingress controller:

  1. Run network cluster init to create the Nginx resources.
  2. Wait for the ingress to come up and the hostname to propagate through public DNS (this will take a few minutes).
  3. Determine the IP address for the Nginx EXTERNAL-IP:
INGRESS_HOSTNAME=$(kubectl -n ingress-nginx get svc/ingress-nginx-controller -o json  | jq -r .status.loadBalancer.ingress[0].hostname)
INGRESS_IPADDR=$(dig $INGRESS_HOSTNAME +short)
  4. Set a virtual host domain resolving *.<EXTERNAL-IP>.nip.io to the ingress IP:
export TEST_NETWORK_INGRESS_DOMAIN=$(echo $INGRESS_IPADDR | tr -s '.' '-').nip.io

For additional guidelines on configuring ingress and DNS, see Considerations for Kubernetes Distributions.

Vagrant: fabric-devenv

The fabric-devenv project will create a local development Virtual Machine, including all required prerequisites for running a KIND cluster and the sample network.

To work around an issue resolving the kube DNS hostnames in vagrant, override the internal DNS name for Fabric services with:

export TEST_NETWORK_KUBE_DNS_DOMAIN=test-network

Troubleshooting Tips

  • The network script prints output and progress to a network-debug.log file. In a second shell:
tail -f network-debug.log
  • Tail the operator logging output:
kubectl -n test-network logs -f deployment/fabric-operator
  • On OSX, there is a bug in the Golang DNS resolver (Fabric #3372 and Golang #43398), causing the Fabric binaries to occasionally stall out when querying DNS. This issue can cause osnadmin / channel join to time out, throwing an error when joining the channel. Fix this by running a local build of the Fabric binaries and copying the build outputs from fabric/build/bin/* --> sample-network/bin (see the sketch after this list).

  • Both Fabric and Kubernetes are complex systems. On occasion, things don't work as they should, and it's impossible to enumerate all the failure cases that can come up in the wild. When something in kube doesn't come up correctly, use k9s (or another K8s navigator) to browse deployments, pods, services, and logs. Usually hitting d (describe) on a stuck resource in the test-network namespace is enough to determine the source of the error.
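
A minimal sketch of the binary rebuild mentioned above, assuming a native build via the Fabric Makefile (targets and paths may vary by release; the /tmp clone location is arbitrary):

git clone https://github.com/hyperledger/fabric.git /tmp/fabric
make -C /tmp/fabric native
cp /tmp/fabric/build/bin/* ./bin/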