# [0011] zkApp-CLI enhancements for local lightnet management. #35
#### Stop lightnet network

- **Command**: `zk lightnet stop`
From my understanding, docker containers will leave behind logs on the user's system. This could introduce a ton of bloat where logs are being kept from each run. Do you think we should wipe the logs out when tearing down the network or have some subcommand to do some house cleaning? Or should we just let the users worry about that, wdyt?
Great question @MartinMinkov. I think we should clean the logs as a default.
Thanks @MartinMinkov,

> docker containers will leave behind logs on the user's system

Which logs are you referring to here? As stated in the description of the `stop` command:

> In future we might want to extend this command with additional options to preserve the network state between container lifecycles.

This essentially means that for this initial version we will wipe everything (containers will start with the `--rm` option and we will clean up any other leftovers). The RFC has been updated to clearly describe the cleanup procedure.
- --proof-level: `none` or `full` (default: `none`)
- --archive-node: `true` or `false` (default: `true`)
- --mina-branch: `rampup`, `berkeley` or `develop` (default: `rampup`)
- --dune-profile: `lightnet` or `devnet` (default: `lightnet`)
When would someone use `devnet` for the dune profile? Is this something we need to expose? Are there invalid `mina-branch` and `dune-profile` combinations to look out for?
> When would someone use devnet for the dune profile?

In case one needs network properties closer to those used in public networks. Use `multi-node` as the network mode, `full` as the proof level, and `devnet` as the Dune profile, and you will get a container that runs a small network of predefined participants with constants and properties closer to the real world.

> Is this something we need to expose?

Yes, I think we need it for fine-tuning.

> Are there invalid mina-branch and dune-profile combinations to look out for?

No, every major Mina branch that we publish a Docker image for is built twice: with the `lightnet` and `devnet` Dune profiles. This results in 2 images per Mina branch. The image tag anatomy is also described in the Docker Hub repo description.

Do you think we need to document something in more detail?
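For illustration, the option set above could map onto a single `docker run` invocation along these lines (a sketch only; the environment variable names and image tag scheme are assumptions, not the documented interface of the `o1labs/mina-local-network` image):

```javascript
// Build the argument list for a hypothetical `docker run` call from the
// `zk lightnet start` options. Variable and tag names are illustrative.
function buildDockerRunArgs({
  mode = 'single-node',
  proofLevel = 'none',
  minaBranch = 'rampup',
  duneProfile = 'lightnet',
} = {}) {
  return [
    'run',
    '--rm', // per the RFC, containers are removed on stop for now
    '-e', `NETWORK_TYPE=${mode}`,
    '-e', `PROOF_LEVEL=${proofLevel}`,
    '-p', '8080:8080', // Mina Daemon GraphQL endpoint
    '-p', '5432:5432', // Archive Node PostgreSQL
    `o1labs/mina-local-network:${minaBranch}-latest-${duneProfile}`,
  ];
}
```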
- **Command**: `zk lightnet start <options>`
- **Function**: Checks the env., starts the local lightnet Docker container with reasonable default configuration options and waits until the network syncs. Repeating the command while the network is up and running should not start a new container, but rather check the status of the existing one.
- **Configuration options**:
  - --mode: `single-node` or `multi-node` (default: `single-node`)
How many nodes will be multi-node? Will it be configurable or static?
The Docker Hub repo description has a "Multi-Node ports reference" section, which gives an overview of the network topology by listing participants and their exposed ports. Maybe we need to describe it in more detail on the Docker Hub page.
Yes, as of now it is a static/predefined network topology, just enough to keep the network alive. I've described the reason for having it in the discussion above.
I also updated the Docker Hub repo description mentioned above (link) to provide a better overview of the services topology.
#### Handle logs

- **Command**: `zk lightnet logs <sub-command>`
Do you think it would be helpful to specify logs of specific nodes? Maybe only fetching logs from the single node or archive node? Do you see any value in that? E.g. maybe we assign IDs to each node and can do `zk logs node-id1`?
I think the majority will start default networks (which is the single-node network) and, because we operate on users' local machines, there probably won't be much benefit in being that specific. For both modes I was thinking to just compress all available logs, save the archive, and output the created file path (when the user decides to save logs).
Viewing logs, on the other hand, should have the mentioned option to see logs of a specific network participant.
Maybe we should not even expose `mode` and always run networks in `single-node` mode?
- **Command**: `zk lightnet logs <sub-command>`
- **Sub-commands**:
  - **view**: `less` or `tail` the log files (Mina Daemon or Archive Node). OS agnostic.
  - **save**: Saves the log files to the default location and outputs the resulting path.
What will the default location be?
I have the same question. We could reuse a location similar to where we store cached fee payer information. For example:
`<os-specific-homedir>/.cache/zkapp-cli/lightnet/logs`
Yes, I was thinking about what @ymekuria suggested.
Another question I have is related to fault tolerance. How should we plan around node failures? If a node fails, are there any restart policies that happen automatically? If not, could we detect node errors and preemptively fix them? Should we notify users of these node failures right away? What strategy would you like to see?
#### Help

- **Command**: `zk lightnet`
- **Output**: Help message with description of available sub-commands and their options.
I like this pattern of listing the help message with sub-commands after entering `zk lightnet`. We should also add `zk lightnet` and its sub-commands to the help message a user can invoke by entering `zk` or `zk --help`. It can look something like this:
    Usage: zk [options]

    Commands:
      zk project [name]    Create a new project  [aliases: proj, p]
      zk file [name]       Create a file and generate the corresponding test file  [aliases: f]
      zk config            Add a new deploy alias
      zk deploy [alias]    Deploy or redeploy a zkApp
      zk example [name]    Create an example project  [aliases: e]
      zk system            Show system info  [aliases: sys, s]
      zk lightnet [sub-command]
        start <options>      Start a light network  [aliases: tbd]
                             List start options
        stop <options>       Stop a light network  [aliases: tbd]
                             List stop options
        logs <sub-command>   Display network logs  [aliases: tbd]
                             List logs sub-commands
        explorer             Start an explorer  [aliases: tbd]

    Options:
      -h, --help     Show help  [boolean]
      -v, --version  Show version number  [boolean]
Thank you @ymekuria, that was the idea.
- Mina Daemon GraphQL endpoint for communication: [http://localhost:8080/graphql](http://localhost:8080/graphql)
- PostgreSQL connection string: `postgresql://postgres:postgres@localhost:5432/archive`;
- Mina Daemon and Archive Node logs path;
- [o1js configuration](https://github.com/o1-labs/o1js/blob/ccf50e0b58190d9700c6dda4fafee3e62e270131/src/examples/zkapps/hello_world/run_live.ts#L18) code snippet.
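The linked o1js configuration snippet boils down to pointing o1js at the local GraphQL endpoint listed above, roughly as follows (a sketch based only on the endpoints in this list; check the linked example for the authoritative version):

```javascript
import { Mina } from 'o1js';

// Point o1js at the lightnet Mina Daemon GraphQL endpoint.
const network = Mina.Network('http://localhost:8080/graphql');
Mina.setActiveInstance(network);
```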
These are great outputs! We need to supplement this with docs walking developers through how to connect and use the light network in their workflow. We should have some text output in the CLI with all the information above, next immediate steps, and maybe a link to docs when they are completed.
Agree!
#### Stop lightnet network

- **Command**: `zk lightnet stop`
- In the future we might want to extend this command with additional options to preserve the network state between container lifecycles.
+1 I think this can be a useful feature but we shouldn't focus on it for the first version.
#### Start lightnet network

- **Command**: `zk lightnet start <options>`
- **Function**: Checks the env., starts the local lightnet Docker container with reasonable default configuration options and waits until the network syncs. Repeating the command while the network is up and running should not start a new container, but rather check the status of the existing one.
What will be checked in the env.? Is the env. configured by the options entered in `zk lightnet start <options>` or from somewhere else?
Here by "checks the env." I was only thinking about:

- check that the Docker engine is installed and accessible by the user
- check internet connectivity to make it possible to download the image
- check that the system has enough space to unpack the image and start the container
- Testing all lightnet commands and their possible variations.
- Addressing potential error scenarios and recovery mechanisms.
- **Testing requirements**:
  - Test in various local setups to simulate different development environments.
In the future it will be useful to test interacting with the light network from a browser UI.
Agreed that this is a future requirement but I would love to see this happen at some point.
To be fair, I need to say that the light network is already in use for `zkapp-cli` and `o1js` E2E tests. This RFC and implementation is essentially a wrapper around https://hub.docker.com/r/o1labs/mina-local-network and other tools that are already successfully in use and work really well (including local env. and CI).
Going forward, o1js and zkapp-cli tests will continue to utilize the same toolset as they do right now (check the CI jobs of the repos mentioned).
Should we add more capabilities to `zkapp-cli` and allow users to interact with deployed zkApps via a UI? No problem, since we already have all the tools.
This looks great and will be very useful for developers. I had some comments, questions, and suggestions.
#### Launch the blockchain explorer

- **Command**: `zk lightnet explorer`
@shimkiv just confirming that this functionality will be added once everything else here has been released, and after archive node API. Correct?
Correct.
@MartinMinkov absolutely makes sense, but I'd extract it as a task for later. For now I'd just add
Co-authored-by: Barrie Byron <[email protected]>
Implementation is ready, here is an Epic.
@bkase can you please approve it?
Closes https://github.com/o1-labs/quality-and-testing/issues/139