feat(multichain-testing): stakeIca contract e2e test #9534

Merged · 13 commits · Jul 3, 2024
10 changes: 10 additions & 0 deletions .github/workflows/multichain-e2e.yml
@@ -25,6 +25,12 @@ jobs:
node-version: 18.x
path: ./agoric-sdk

- name: yarn link
run: |
yarn link-cli ~/bin/agoric
echo "/home/runner/bin" >> $GITHUB_PATH
working-directory: ./agoric-sdk

- name: Enable Corepack
run: corepack enable
working-directory: ./agoric-sdk/multichain-testing
@@ -62,6 +68,10 @@ jobs:
curl --fail --retry 3 --retry-delay 10 http://localhost:8081/chains/osmosislocal || (echo "osmosislocal URL check failed")
curl --fail --retry 3 --retry-delay 10 http://localhost:8081/chains/gaialocal || (echo "gaialocal URL check failed")

- name: Override Chain Registry
run: make override-chain-registry
working-directory: ./agoric-sdk/multichain-testing

- name: Run @agoric/multichain-testing E2E Tests
run: yarn test
working-directory: ./agoric-sdk/multichain-testing
2 changes: 2 additions & 0 deletions multichain-testing/.gitignore
@@ -1,3 +1,5 @@
.tsimp
.yarn/*
!.yarn/patches/*
revise-chain-info*
start-*
12 changes: 10 additions & 2 deletions multichain-testing/Makefile
@@ -1,6 +1,6 @@
# see https://github.com/cosmology-tech/starship/blob/0e18757b8393357fc66426c5ee23da4ccf760e74/examples/getting-started/Makefile

NAME = starship-getting-started
NAME = agoric-multichain-testing
FILE = config.yaml

HELM_REPO = starship
@@ -53,7 +53,7 @@ stop-forward:
###############################################################################
### Local Kind Setup ###
###############################################################################
KIND_CLUSTER=starship
KIND_CLUSTER=agship

.PHONY: setup-kind
setup-kind:
@@ -68,9 +68,17 @@ clean-kind:
###############################################################################
PROVISION_POOL_ADDR=agoric1megzytg65cyrgzs6fvzxgrcqvwwl7ugpt62346

# add address
add-address:
kubectl exec -i agoriclocal-genesis-0 -c validator -- agd keys add user1

fund-provision-pool:
kubectl exec -i agoriclocal-genesis-0 -c validator -- agd tx bank send faucet $(PROVISION_POOL_ADDR) 1000000000uist -y -b block

override-chain-registry:
node_modules/.bin/tsx scripts/fetch-starship-chain-info.ts && \
node_modules/.bin/tsx scripts/deploy-cli.ts src/revise-chain-info.builder.js

ADDR=agoric1ldmtatp24qlllgxmrsjzcpe20fvlkp448zcuce
COIN=1000000000uist

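For orientation, the new `override-chain-registry` target simply chains the two scripts added in this PR: regenerate `starship-chain-info.js` from the local Starship registry, then deploy the `revise-chain-info` core eval. A minimal sketch of the same flow as a Node script (hypothetical helper, not part of this PR; it assumes the package's `execa` and `tsx` devDependencies):

```ts
#!/usr/bin/env tsx
// Hypothetical equivalent of `make override-chain-registry`, run from Node.
import { execa } from 'execa';

// Resolve tsx from node_modules/.bin, like the Makefile does.
const opts = { preferLocal: true, stdio: 'inherit' } as const;

// 1. Regenerate starship-chain-info.js from the registry at localhost:8081.
await execa('tsx', ['scripts/fetch-starship-chain-info.ts'], opts);

// 2. Build and submit the core eval that revises chain info on agoriclocal.
await execa('tsx', ['scripts/deploy-cli.ts', 'src/revise-chain-info.builder.js'], opts);
```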
48 changes: 31 additions & 17 deletions multichain-testing/README.md
@@ -16,16 +16,10 @@ The `agoric` software revision includes the vats necessary for building and testing

## Initial Setup

Ensure you have `kubectl`, `kind`, `helm`, and `yq` installed on your machine. For convenience, the following command will install dependencies:
Ensure you have `kubectl`, `kind`, `helm`, and `yq` installed on your machine.

```sh
make setup-deps
```

You will need a `kind` cluster:

```sh
make setup-kind
make setup
**Contributor:**

I'm seeing an error running this command but I'm not quite sure why. Did you happen to see this error before?

Expand to see logs (it's long):
make setup
bash /Users/luqi/github/Agoric/agoric-sdk/multichain-testing/scripts/dev-setup.sh
All binaries are installed
kind create cluster --name agship
Creating cluster "agship" ...
 ✓ Ensuring node image (kindest/node:v1.30.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✗ Starting control-plane 🕹️
Deleted nodes: ["agship-control-plane"]
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged agship-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
Command Output: I0625 00:52:21.621598     139 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
W0625 00:52:21.631689     139 initconfiguration.go:348] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration
[init] Using Kubernetes version: v1.30.0
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0625 00:52:21.660190     139 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0625 00:52:21.832105     139 certs.go:483] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [agship-control-plane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 172.18.0.2 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0625 00:52:22.096770     139 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0625 00:52:22.191994     139 certs.go:483] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0625 00:52:22.277570     139 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0625 00:52:22.389020     139 certs.go:483] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [agship-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [agship-control-plane localhost] and IPs [172.18.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0625 00:52:22.775910     139 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0625 00:52:22.918423     139 kubeconfig.go:112] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0625 00:52:23.175013     139 kubeconfig.go:112] creating kubeconfig file for super-admin.conf
[kubeconfig] Writing "super-admin.conf" kubeconfig file
I0625 00:52:23.303598     139 kubeconfig.go:112] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0625 00:52:23.472604     139 kubeconfig.go:112] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0625 00:52:23.680414     139 kubeconfig.go:112] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0625 00:52:23.816255     139 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0625 00:52:23.816525     139 manifests.go:103] [control-plane] getting StaticPodSpecs
I0625 00:52:23.817755     139 certs.go:483] validating certificate period for CA certificate
I0625 00:52:23.818030     139 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0625 00:52:23.818044     139 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0625 00:52:23.818046     139 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0625 00:52:23.818048     139 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0625 00:52:23.818050     139 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0625 00:52:23.818464     139 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0625 00:52:23.818475     139 manifests.go:103] [control-plane] getting StaticPodSpecs
I0625 00:52:23.818583     139 manifests.go:129] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0625 00:52:23.818592     139 manifests.go:129] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0625 00:52:23.818595     139 manifests.go:129] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0625 00:52:23.818597     139 manifests.go:129] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0625 00:52:23.818598     139 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0625 00:52:23.818600     139 manifests.go:129] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0625 00:52:23.818602     139 manifests.go:129] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0625 00:52:23.818981     139 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0625 00:52:23.818989     139 manifests.go:103] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0625 00:52:23.819110     139 manifests.go:129] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0625 00:52:23.819465     139 manifests.go:158] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
I0625 00:52:23.819554     139 kubelet.go:68] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0625 00:52:24.210376     139 loader.go:395] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is not healthy after 4m0.001385985s

Unfortunately, an error has occurred:
	The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' returned error: Get "http://localhost:10248/healthz": context deadline exceeded


This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:110
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:115
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
github.com/spf13/cobra.(*Command).execute
	github.com/spf13/[email protected]/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
	github.com/spf13/[email protected]/command.go:1068
github.com/spf13/cobra.(*Command).Execute
	github.com/spf13/[email protected]/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
	k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
main.main
	k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	runtime/proc.go:271
runtime.goexit
	runtime/asm_amd64.s:1695
make: *** [setup-kind] Error 1

**Member Author:**

I haven't seen this before, but have a few suggestions:

1. Ensure Kubernetes is enabled in Docker and Docker has enough resources allocated (screenshots of Docker Desktop's Kubernetes and resource settings omitted).

   *I'm not sure all of these resources are necessary, but this is what mine is configured to. Would be great if we can determine the minimum amount required. I suspect at least ~4 CPU and ~8GB RAM given the resource overrides in config.yaml.*

2. Try adding `--verbosity 9` to the setup-kind command for more detailed log output: `kind create cluster --name agship --verbosity 9`

3. Take a look at the Starship Docs for the primary source of truth and see if there's something I might've missed documenting.

```
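Following up on the resource question in the thread above: a hypothetical preflight check (not part of this PR) that verifies Docker has roughly the suspected minimum of ~4 CPUs and ~8 GB RAM before running `make setup`. The thresholds are guesses from the comment, not verified minimums.

```ts
// Hypothetical preflight check before `make setup`; thresholds are assumptions.
import { execa } from 'execa';

const { stdout } = await execa('docker', ['info', '--format', '{{json .}}']);
const { NCPU, MemTotal } = JSON.parse(stdout);

const MIN_CPUS = 4; // suspected minimum from the review thread
const MIN_MEM_BYTES = 8 * 1024 ** 3; // ~8 GiB

if (NCPU < MIN_CPUS || MemTotal < MIN_MEM_BYTES) {
  console.error(
    `Docker reports ${NCPU} CPUs and ${(MemTotal / 1024 ** 3).toFixed(1)} GiB RAM; ` +
      `kind may fail to start its control plane below ~${MIN_CPUS} CPUs / 8 GiB.`,
  );
  process.exit(1);
}
console.log('Docker resources look sufficient for `make setup`.');
```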

## Getting Started
@@ -34,21 +28,31 @@ make setup-kind
# install helm chart and start starship service
make install

# NOTE: it takes about 10-12 minutes for the above to finish setting up. Use `watch kubectl get pods` to confirm all pods are up and running before running the next command.

# expose ports on your local machine. useful for testing dapps
make port-forward

# stop the containers and port-forwarding
make stop
```

To setup finish setting up Agoric, also run:
**Wait 10-12 minutes.** It takes some time for the above to finish setting up. Use `watch kubectl get pods` to confirm all pods are up and running before running the next command.

Then, to finish setting up Agoric, run:

```bash
make fund-provision-pool
make fund-provision-pool override-chain-registry
```

If you get an error like "connection refused", you need to wait longer, until all the pods are Running.
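If you'd rather script the wait than watch `kubectl get pods`, here is a rough sketch (hypothetical, not part of this PR) that polls the local chain registry until it responds; it assumes `make port-forward` has already exposed port 8081:

```ts
// Hypothetical wait helper: poll the Starship registry until agoriclocal is served.
const waitForRegistry = async (url = 'http://localhost:8081/chains/agoriclocal') => {
  for (let attempt = 1; attempt <= 60; attempt += 1) {
    try {
      const res = await fetch(url);
      if (res.ok) return res.json();
    } catch {
      // "connection refused" while pods are still starting; keep waiting
    }
    await new Promise(resolve => setTimeout(resolve, 10_000));
  }
  throw new Error(`registry not reachable at ${url}`);
};

console.log(await waitForRegistry());
```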

## Cleanup

```sh
# stop the containers and port-forwarding
make stop

# delete the clusters
make clean
```


## Logs

You can use the following commands to view logs:
@@ -61,8 +65,8 @@ make tail-slog
kubectl logs agoriclocal-genesis-0 --container=validator --follow

# relayer logs
kubectl logs hermes-agoric-gaia-0 --container=validator --follow
kubectl logs hermes-agoric-gaia-0 --container=validator --follow
kubectl logs hermes-agoric-gaia-0 --container=relayer --follow
kubectl logs hermes-osmosis-gaia-0 --container=relayer --follow
```

## Agoric Smart Wallet
@@ -82,3 +86,13 @@ make fund-wallet COIN=20000000ubld ADDR=$ADDR
# provision the smart wallet
make provision-smart-wallet ADDR=$ADDR
```

## Chain Registry

These URLs only work if you've done `make port-forward`.

http://localhost:8081/chains/agoriclocal
http://localhost:8081/chains/osmosislocal
http://localhost:8081/chains/gaialocal
http://localhost:8081/chains/agoriclocal/keys
http://localhost:8081/ibc
9 changes: 7 additions & 2 deletions multichain-testing/package.json
@@ -17,26 +17,30 @@
},
"packageManager": "[email protected]",
"devDependencies": {
"@endo/errors": "^1.2.2",
"@agoric/cosmic-proto": "0.4.1-dev-08f8549.0",
"@cosmjs/crypto": "^0.32.2",
"@cosmjs/proto-signing": "^0.32.2",
"@cosmjs/stargate": "^0.32.2",
"@endo/errors": "^1.2.2",
"@endo/far": "^1.1.2",
"@endo/nat": "^5.0.7",
"@endo/ses-ava": "^1.2.2",
"@types/eslint": "^8",
"@types/fs-extra": "^11",
"@types/node": "^20.11.13",
"@typescript-eslint/eslint-plugin": "^6.20.0",
"@typescript-eslint/parser": "^6.20.0",
"ava": "^6.1.3",
"eslint": "^8.56.0",
"eslint-config-prettier": "^9.1.0",
"eslint-plugin-prettier": "^5.1.3",
"execa": "^9.2.0",
"fs-extra": "^11.2.0",
"patch-package": "^8.0.0",
"prettier": "^3.2.4",
"starshipjs": "2.0.0",
"tsimp": "^2.0.10",
"tsx": "^4.15.6",
"typescript": "^5.3.3"
},
"resolutions": {
@@ -56,7 +60,8 @@
"**/*.test.ts"
],
"concurrency": 1,
"serial": true
"serial": true,
"timeout": "125s"
**Member:**

oddly specific :)

**Member Author:**

😅 the unbonding_period is 2min

},
"prettier": {
"arrowParens": "avoid",
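As context for the `"timeout": "125s"` exchange above: with the local chains configured for a roughly 2-minute unbonding_period, a staking test that waits for an undelegation to complete consumes most of that budget, so the ava timeout sits just above it. A hypothetical illustration (not a test from this PR; the undelegation polling is stubbed with a sleep):

```ts
import test from 'ava';

// Assumed from the review thread: the local chains use a ~2 minute unbonding_period.
const UNBONDING_PERIOD_MS = 2 * 60 * 1000;

test('undelegation completes within the 125s ava timeout', async t => {
  const start = Date.now();
  // In a real test: submit an undelegate tx, then poll until the balance updates.
  await new Promise(resolve => setTimeout(resolve, UNBONDING_PERIOD_MS));
  t.true(Date.now() - start >= UNBONDING_PERIOD_MS);
});
```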
29 changes: 29 additions & 0 deletions multichain-testing/scripts/deploy-cli.ts
@@ -0,0 +1,29 @@
#!/usr/bin/env tsx
import '@endo/init/debug.js';

import { execa } from 'execa';
import fse from 'fs-extra';
import childProcess from 'node:child_process';

import { makeAgdTools } from '../tools/agd-tools.js';
import { makeDeployBuilder } from '../tools/deploy.js';

async function main() {
**Member:**

I expect we'll DRY later

**Member Author:**

Agree for .js files in /tools wrt to #8963. This file, I'm surprised to hear this feedback - can you point me to something similar?

**Member:**

Nothing similar but I could see it being a regular CLI command. Though I didn't have any specific ideas in mind other than to ignore my DRY/factoring sniffer.

const builder = process.argv[2];

if (!builder) {
console.error('USAGE: deploy-cli.ts <builder script>');
process.exit(1);
}

try {
const agdTools = await makeAgdTools(console.log, childProcess);
const deployBuilder = makeDeployBuilder(agdTools, fse.readJSON, execa);
await deployBuilder(builder);
} catch (err) {
console.error(err);
process.exit(1);
}
}

main();
59 changes: 59 additions & 0 deletions multichain-testing/scripts/fetch-starship-chain-info.ts
@@ -0,0 +1,59 @@
#!/usr/bin/env tsx

import nodeFetch from 'node-fetch';
import fsp from 'node:fs/promises';
import prettier from 'prettier';

import { convertChainInfo } from '@agoric/orchestration/src/utils/registry.js';

import type { IBCInfo, Chains } from '@chain-registry/types';

const fetch = nodeFetch.default;

/**
* Chain registry running in Starship
*
* https://github.com/cosmology-tech/starship/blob/main/starship/proto/registry/service.proto
*
* http://localhost:8081/chains
* http://localhost:8081/chain_ids
* http://localhost:8081/ibc
*/
const BASE_URL = 'http://localhost:8081/';

const { chains }: { chains: Chains } = await fetch(`${BASE_URL}chains`).then(
r => r.json(),
);

const ibc: {
data: IBCInfo[];
} = await fetch(`${BASE_URL}ibc`).then(r => r.json());

// UNTIL https://github.com/cosmology-tech/starship/issues/494
const backmap = {
agoriclocal: 'agoric',
osmosislocal: 'osmosis',
gaialocal: 'cosmoshub',
};
for (const ibcInfo of ibc.data) {
ibcInfo.chain_1.chain_name = backmap[ibcInfo.chain_1.chain_name];
ibcInfo.chain_2.chain_name = backmap[ibcInfo.chain_2.chain_name];
for (const c of ibcInfo.channels) {
// @ts-expect-error XXX bad typedef
c.tags.preferred = c.tags.perferred;
}
}

const chainInfo = await convertChainInfo({
chains,
ibcData: ibc.data,
});

const record = JSON.stringify(chainInfo, null, 2);
const src = `/** @file Generated by fetch-starship-chain-info.ts */\nexport default /** @type {const} } */ (${record});`;
const prettySrc = await prettier.format(src, {
parser: 'babel', // 'typescript' fails to preserve parens for typecast
singleQuote: true,
trailingComma: 'all',
});
await fsp.writeFile('./starship-chain-info.js', prettySrc);
3 changes: 1 addition & 2 deletions multichain-testing/scripts/install.sh
@@ -25,7 +25,7 @@ HELM_REPO="starship"
HELM_CHART="starship/devnet"
HELM_REPO_URL="https://cosmology-tech.github.io/starship/"
HELM_CHART_VERSION="0.2.2"
HELM_NAME="starship-getting-started"
HELM_NAME="agoric-multichain-testing"

# check_helm function verifies the helm binary is installed
function check_helm() {
@@ -124,4 +124,3 @@ done
check_helm
setup_helm
install_chart

22 changes: 22 additions & 0 deletions multichain-testing/src/revise-chain-info.builder.js
@@ -0,0 +1,22 @@
/* global harden */
/// <reference types="ses" />
import { makeHelpers } from '@agoric/deploy-script-support';

import chainInfo from '../starship-chain-info.js';

/** @type {import('@agoric/deploy-script-support/src/externalTypes.js').CoreEvalBuilder} */
export const defaultProposalBuilder = async () =>
harden({
sourceSpec: '@agoric/orchestration/src/proposals/revise-chain-info.js',
getManifestCall: [
'getManifestForReviseChains',
{
chainInfo,
},
],
});

export default async (homeP, endowments) => {
const { writeCoreEval } = await makeHelpers(homeP, endowments);
await writeCoreEval('revise-chain-info', defaultProposalBuilder);
};
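After `deploy-cli.ts` submits this builder (via `make override-chain-registry`), one way to spot-check the result is to read the revised chain info back out of vstorage. A hypothetical check, not part of this PR: the `published.agoricNames.chain.<name>` path and the kubectl/agd invocation are assumptions modeled on the Makefile targets above.

```ts
// Hypothetical post-deploy check; vstorage path and CLI flags are assumptions.
import { execa } from 'execa';

const queryChainInfo = async (chainName: string) => {
  const { stdout } = await execa('kubectl', [
    'exec', '-i', 'agoriclocal-genesis-0', '-c', 'validator', '--',
    'agd', 'query', 'vstorage', 'data',
    `published.agoricNames.chain.${chainName}`,
    '--output', 'json',
  ]);
  return JSON.parse(stdout);
};

console.log(await queryChainInfo('osmosis'));
```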