diff --git a/README.md b/README.md
index 05e50e9e2b..d2944efcb8 100644
--- a/README.md
+++ b/README.md
@@ -78,7 +78,7 @@ Starwhale is an MLOps platform. It provides **Instance**, **Project**, **Runtime
   - 🥑 **Task**: Operation entity. Tasks are in some specific steps.
 - **Scenarios**: Starwhale provides the best practice and out-of-the-box for different ML/DL scenarios.
-  - 🚝 **Model Training(TBD)**: Use Starwhale Python SDK to record experiment meta, metric, log, and artifact.
+  - 🚝 **Model Training(WIP)**: Use Starwhale Python SDK to record experiment meta, metric, log, and artifact.
   - 🛥️ **Model Evaluation**: `PipelineHandler` and some report decorators can give you complete, helpful, and user-friendly evaluation reports with only a few lines of codes.
   - 🛫 **Model Serving(TBD)**: Starwhale Model can be deployed as a web service or stream service in production with deployment capability, observability, and scalability. Data scientists do not need to write ML/DL irrelevant codes.
@@ -103,171 +103,170 @@ Starwhale is an MLOps platform. It provides **Instance**, **Project**, **Runtime

   ```bash
   git clone https://github.com/star-whale/starwhale.git
-  ```
-
-  If [git-lfs](https://git-lfs.github.com/) has not been previously installed in the local environment(the command is `git lfs install`), you need to download the trained model file.
-
-  ```bash
-  wget https://media.githubusercontent.com/media/star-whale/starwhale/main/example/mnist/models/mnist_cnn.pt -O example/mnist/models/mnist_cnn.pt
+  cd starwhale
   ```

 - ☕ **STEP3**: Building a runtime

+  > When you first build the runtime, creating an isolated Python environment and downloading the Python dependencies can take a long time. The execution time depends on the machine's network conditions and the number of packages listed in `runtime.yaml`. Setting a suitable PyPI mirror and cache directory in the `~/.pip/pip.conf` file is a recommended practice.
+ > + > For users in the mainland of China, the following conf file is an option: + > + > ```conf + > [global] + > cache-dir = ~/.cache/pip + > index-url = https://mirrors.aliyun.com/pypi/simple/ + > extra-index-url = https://pypi.doubanio.com/simple + > ``` + ```bash - cd example/runtime/pytorch - swcli runtime build . + swcli runtime build example/runtime/pytorch swcli runtime list swcli runtime info pytorch/version/latest + swcli runtime restore pytorch/version/latest ``` - 🍞 **STEP4**: Building a model - - Enter `example/mnist` directory: - - ```bash - cd ../../mnist - ``` - - - Write some code with Starwhale Python SDK. Complete code is [here](https://github.com/star-whale/starwhale/blob/main/example/mnist/mnist/evaluator.py). - - ```python - import typing as t - import torch - from starwhale import Image, PipelineHandler, PPLResultIterator, multi_classification - - class MNISTInference(PipelineHandler): - def __init__(self) -> None: - super().__init__() - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.model = self._load_model(self.device) - - def ppl(self, img: Image, **kw: t.Any) -> t.Tuple[t.List[int], t.List[float]]: - data_tensor = self._pre(img) - output = self.model(data_tensor) - return self._post(output) - - @multi_classification( - confusion_matrix_normalize="all", - show_hamming_loss=True, - show_cohen_kappa_score=True, - show_roc_auc=True, - all_labels=[i for i in range(0, 10)], - ) - def cmp( - self, ppl_result: PPLResultIterator - ) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]: - result, label, pr = [], [], [] - for _data in ppl_result: - label.append(_data["annotations"]["label"]) - result.extend(_data["result"][0]) - pr.extend(_data["result"][1]) - return label, result, pr - - def _pre(self, input:bytes): - """write some mnist preprocessing code""" - - def _post(self, input:bytes): - """write some mnist post-processing code""" - - def _load_model(): - """load your pre trained model""" - ``` - - 
- Define `model.yaml`. - - ```yaml - name: mnist - model: - - models/mnist_cnn.pt - config: - - config/hyperparam.json - run: - handler: mnist.evaluator:MNISTInference - ``` + - Download pre-trained model file: + + ```bash + cd example/mnist + make download-model + # For users in the mainland of China, please add `CN=1` environment for make command: + # CN=1 make download-model + cd - + ``` + + - [Code Example]Write some code with Starwhale Python SDK. Complete code is [here](https://github.com/star-whale/starwhale/blob/main/example/mnist/mnist/evaluator.py). + + ```python + import typing as t + import torch + from starwhale import Image, PipelineHandler, PPLResultIterator, multi_classification + + class MNISTInference(PipelineHandler): + def __init__(self) -> None: + super().__init__() + self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") + self.model = self._load_model(self.device) + + def ppl(self, data: t.Dict[str, t.Any], **kw: t.Any) -> t.Tuple[float, t.List[float]]: + data_tensor = self._pre(data["img"]) + output = self.model(data_tensor) + return self._post(output) + + @multi_classification( + confusion_matrix_normalize="all", + show_hamming_loss=True, + show_cohen_kappa_score=True, + show_roc_auc=True, + all_labels=[i for i in range(0, 10)], + ) + def cmp( + self, ppl_result: PPLResultIterator + ) -> t.Tuple[t.List[int], t.List[int], t.List[t.List[float]]]: + result, label, pr = [], [], [] + for _data in ppl_result: + label.append(_data["ds_data"]["label"]) + result.append(_data["result"][0]) + pr.append(_data["result"][1]) + return label, result, pr + + def _pre(self, input:bytes): + """write some mnist preprocessing code""" + + def _post(self, input:bytes): + """write some mnist post-processing code""" + + def _load_model(): + """load your pre trained model""" + ``` + + - [Code Example]Define `model.yaml`. 
+ + ```yaml + name: mnist + model: + - models/mnist_cnn.pt + config: + - config/hyperparam.json + run: + handler: mnist.evaluator:MNISTInference + ``` - Run one command to build the model. - ```bash - swcli model build . + ```bash + swcli model build example/mnist --runtime pytorch/version/latest + swcli model list swcli model info mnist/version/latest - ``` + ``` - 🍺 **STEP5**: Building a dataset - Download MNIST RAW data files. - ```bash - mkdir -p data && cd data - wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz - wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz - gzip -d *.gz - cd .. - ls -lah data/* - ``` + ```bash + cd example/mnist + make download-data + # For users in the mainland of China, please add `CN=1` environment for make command: + # CN=1 make download-data + cd - + ``` - - Write some code with Starwhale Python SDK. Full code is [here](https://github.com/star-whale/starwhale/blob/main/example/mnist/mnist/dataset.py). + - [Code Example]Write some code with Starwhale Python SDK. Full code is [here](https://github.com/star-whale/starwhale/blob/main/example/mnist/mnist/dataset.py). 
- ```python + ```python import struct - import typing as t from pathlib import Path - from starwhale import BuildExecutor - - class DatasetProcessExecutor(SWDSBinBuildExecutor): - def iter_item(self) -> t.Generator[t.Tuple[t.Any, t.Any], None, None]: - root_dir = Path(__file__).parent.parent / "data" - - with (root_dir / "t10k-images-idx3-ubyte").open("rb") as data_file, ( - root_dir / "t10k-labels-idx1-ubyte" - ).open("rb") as label_file: - _, data_number, height, width = struct.unpack(">IIII", data_file.read(16)) - _, label_number = struct.unpack(">II", label_file.read(8)) - print( - f">data({data_file.name}) split data:{data_number}, label:{label_number} group" - ) - image_size = height * width - - for i in range(0, min(data_number, label_number)): - _data = data_file.read(image_size) - _label = struct.unpack(">B", label_file.read(1))[0] - yield GrayscaleImage( + from starwhale import GrayscaleImage + + def iter_swds_bin_item(): + root_dir = Path(__file__).parent.parent / "data" + + with (root_dir / "t10k-images-idx3-ubyte").open("rb") as data_file, ( + root_dir / "t10k-labels-idx1-ubyte" + ).open("rb") as label_file: + _, data_number, height, width = struct.unpack(">IIII", data_file.read(16)) + _, label_number = struct.unpack(">II", label_file.read(8)) + print( + f">data({data_file.name}) split data:{data_number}, label:{label_number} group" + ) + image_size = height * width + + for i in range(0, min(data_number, label_number)): + _data = data_file.read(image_size) + _label = struct.unpack(">B", label_file.read(1))[0] + yield { + "img": GrayscaleImage( _data, display_name=f"{i}", shape=(height, width, 1), - ), {"label": _label} - ``` - - - Define `dataset.yaml`. - - ```yaml - name: mnist - handler: mnist.dataset:DatasetProcessExecutor - attr: - alignment_size: 1k - volume_size: 4M - data_mime_type: "x/grayscale" - ``` + ), + "label": _label, + } + ``` - Run one command to build the dataset. - ```bash - swcli dataset build . 
+  ```bash
+  swcli dataset build example/mnist --handler mnist.dataset:iter_swds_bin_item --runtime pytorch/version/latest
   swcli dataset info mnist/version/latest
+  swcli dataset head mnist/version/latest
+  ```

   Starwhale also supports build dataset with pure python sdk. You can try it in [Google Colab](https://colab.research.google.com/github/star-whale/starwhale/blob/main/example/notebooks/dataset-sdk.ipynb).

 - 🍖 **STEP6**: Running an evaluation job

-  ```bash
-  swcli -vvv eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest
+  ```bash
+  swcli eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest
   swcli eval list
-  swcli eval info ${version}
-  ```
-
-👏 Now, you have completed the fundamental steps for Starwhale standalone.
+  swcli eval info $(swcli eval list | grep mnist | grep success | awk '{print $1}' | head -n 1)
+  ```

-Let's go ahead and finish the tutorial on the on-premises instance.
+**👏 Now, you have completed the fundamental steps for Starwhale standalone. Let's go ahead and finish the tutorial on the on-premises instance.**

 ## MNIST Quick Tour for on-premises instance

@@ -284,11 +283,11 @@ Let's go ahead and finish the tutorial on the on-premises instance.
   minikube start
   ```

-  > For users in the mainland of China, please add the startup parameter:`--image-mirror-country=cn`.
-
-  ```bash
-  minikube start --image-mirror-country=cn
-  ```
+  > For users in the mainland of China, please add the extra parameters below. The following command has been tested; you may also try other Kubernetes versions.
+  >
+  > ```bash
+  > minikube start --image-mirror-country=cn --kubernetes-version=1.25.3
+  > ```

   If there is no kubectl bin in your machine, you may use `minikube kubectl` or `alias kubectl="minikube kubectl --"` alias command.

@@ -401,16 +400,16 @@ Let's go ahead and finish the tutorial on the on-premises instance.
Show the uploaded artifacts screenshots - ![console-artifacts.gif](../img/console-artifacts.gif) + ![Console Artifacts](docs/docs/img/console-artifacts.gif)
3. Create and view an evaluation job
Show create job screenshot - ![console-create-job.gif](../img/console-create-job.gif) + ![Console Create Job](docs/docs/img/console-create-job.gif)
-**Congratulations! You have completed the evaluation process for a model.**
+**👏 Congratulations! You have completed the evaluation process for a model.**

 ## Documentation, Community, and Support

@@ -423,7 +422,7 @@ Let's go ahead and finish the tutorial on the on-premises instance.

   - Python Package on [Pypi](https://pypi.org/project/starwhale/).
   - Helm Charts on [Artifacthub](https://artifacthub.io/packages/helm/starwhale/starwhale).
-  - Docker Images on [Docker Hub](https://hub.docker.com/u/starwhaleai) and [ghcr.io](https://github.com/orgs/star-whale/packages).
+  - Docker Images on [Docker Hub](https://hub.docker.com/u/starwhaleai), [Github Packages](https://github.com/orgs/star-whale/packages) and [Starwhale Registry](https://docker-registry.starwhale.cn/).

 - Additionally, you can always find us at *developer@starwhale.ai*.

diff --git a/docs/docs/quickstart/on-premises.md b/docs/docs/quickstart/on-premises.md
index 67800c5e8f..02ecd67725 100644
--- a/docs/docs/quickstart/on-premises.md
+++ b/docs/docs/quickstart/on-premises.md
@@ -31,7 +31,13 @@ In this tutorial, minikube is used instead of the standard Kubernetes cluster

 minikube start
 ```

-For users in the mainland of China, please add these startup parameters to speedup download rate:`--image-mirror-country=cn`. If there is no kubectl bin in your machine, you may use `minikube kubectl` or `alias kubectl="minikube kubectl --"` alias command.
+For users in the mainland of China, please add the extra parameters below. The following command has been tested; you may also try other Kubernetes versions.
+
+```bash
+minikube start --image-mirror-country=cn --kubernetes-version=1.25.3
+```
+
+If there is no `kubectl` binary on your machine, you can use `minikube kubectl` directly, or create an alias: `alias kubectl="minikube kubectl --"`.
 ### 1.3 Installing Starwhale

diff --git a/docs/docs/quickstart/standalone.md b/docs/docs/quickstart/standalone.md
index a7a38ac583..02d8e1e989 100644
--- a/docs/docs/quickstart/standalone.md
+++ b/docs/docs/quickstart/standalone.md
@@ -2,10 +2,10 @@
 title: Standalone Quickstart
 ---

-![Core Workflow](../img/standalone-core-workflow.gif)
-
 **This tutorial is also available as a [Colab Notebook](https://colab.research.google.com/github/star-whale/starwhale/blob/main/example/notebooks/quickstart-standalone.ipynb).**

+![Core Workflow](../img/standalone-core-workflow.gif)
+
 ## 1. Installing Starwhale

 Starwhale has three types of instances: Standalone, On-Premises, and Cloud Hosted. Starting with the standalone mode is ideal for quickly understanding and mastering Starwhale.

@@ -20,7 +20,7 @@ You can install the alpha version by the `--pre` argument.
 :::

 :::note
-Starwhale standalone requires Python 3.7~3.10. Currently, Starwhale only supports Linux and macOS. Windows is coming soon.
+Starwhale standalone requires Python 3.7~3.11. Currently, Starwhale only supports Linux and macOS. Windows is coming soon.
 :::

 At the installation stage, we strongly recommend you follow the [doc](../guides/install/standalone.md).

@@ -40,10 +40,24 @@ We will use ML/DL HelloWorld code `MNIST` to start your Starwhale journey. The f

 Runtime example code are in the `example/runtime/pytorch` directory.

-- Build the Starwhale Runtime bundle:
+- Build the Starwhale Runtime bundle:
+
+  :::tip
+  When you first build the runtime, creating an isolated Python environment and downloading the Python dependencies can take a long time. The execution time depends on the machine's network conditions and the number of packages listed in `runtime.yaml`. Setting a suitable PyPI mirror and cache directory in the `~/.pip/pip.conf` file is a recommended practice.
+
+  For users in the mainland of China, the following conf file is an option:
+
+  ```conf
+  [global]
+  cache-dir = ~/.cache/pip
+  index-url = https://mirrors.aliyun.com/pypi/simple/
+  extra-index-url = https://pypi.doubanio.com/simple
+  ```
+
+  :::

   ```bash
-  swcli runtime build .
+  swcli runtime build example/runtime/pytorch
   ```

 - Check your local Starwhale Runtime:

@@ -53,17 +67,33 @@ Runtime example code are in the `example/runtime/pytorch` directory.
   swcli runtime info pytorch/version/latest
   ```

+- Restore Starwhale Runtime (optional):
+
+  ```bash
+  swcli runtime restore pytorch/version/latest
+  ```
+
 ## 4. Building Model

 Model example code are in the `example/mnist` directory.

-- Build a Starwhale Model:
+- Download the pre-trained model file:
+
+  ```bash
+  cd example/mnist
+  make download-model
+  # For users in the mainland of China, please set the `CN=1` environment variable for the make command:
+  # CN=1 make download-model
+  cd -
+  ```
+
+- Build a Starwhale Model with the prebuilt Starwhale Runtime:

   ```bash
-  swcli model build .
+  swcli model build example/mnist --runtime pytorch/version/latest
   ```

-- Check your local Starwhale Model.
+- Check your local Starwhale Model:

   ```bash
   swcli model list
@@ -77,20 +107,17 @@ Dataset example code are in the `example/mnist` directory.

 - Download the MNIST raw data:

   ```bash
-  mkdir -p data && cd data
-  wget http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
-  wget http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
-  wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
-  wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
-  gzip -d *.gz
-  cd ..
-  ls -lah data/*
+  cd example/mnist
+  make download-data
+  # For users in the mainland of China, please set the `CN=1` environment variable for the make command:
+  # CN=1 make download-data
+  cd -
   ```

-- Build a Starwhale Dataset:
+- Build a Starwhale Dataset with the prebuilt Starwhale Runtime:

   ```bash
-  swcli dataset build .
+  swcli dataset build example/mnist --runtime pytorch/version/latest
   ```

 - Check your local Starwhale Dataset:

@@ -100,23 +127,25 @@ Dataset example code are in the `example/mnist` directory.
   swcli dataset info mnist/version/latest
   ```

+- Head some records from the mnist dataset:
+
+  ```bash
+  swcli dataset head mnist/version/latest
+  ```
+
 ## 6. Running an Evaluation Job

-- Create an evaluation job:
+- Create an evaluation job:

-  ```bash
-  swcli -vvv eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest
-  ```
+  ```bash
+  swcli eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest
+  ```

 - Check the evaluation result:

   ```bash
   swcli eval list
-  swcli eval info ${version}
+  swcli eval info $(swcli eval list | grep mnist | grep success | awk '{print $1}' | head -n 1)
   ```

-:::tip
-When you first use Runtime in the eval run command which maybe use a lot of time to create isolated python environment, download python dependencies. Use the befitting pypi mirror in the `~/.pip/pip.conf` file is a recommend practice.
-:::
-
 👏 Now, you have completed the basic steps for Starwhale standalone.
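The `swcli eval info $(swcli eval list | grep mnist | grep success | awk '{print $1}' | head -n 1)` one-liner introduced in this diff picks the first column of the first listed mnist row marked `success`. The same selection logic can be sketched in Python; note that the sample listing below is hypothetical, since the exact column layout of `swcli eval list` output may differ:

```python
from typing import Optional

def pick_latest_success(listing: str, keyword: str = "mnist") -> Optional[str]:
    """Return the first whitespace-delimited field of the first row that
    contains both `keyword` and 'success', mirroring the shell pipeline:
    grep mnist | grep success | awk '{print $1}' | head -n 1
    """
    for line in listing.splitlines():
        if keyword in line and "success" in line:
            return line.split()[0]
    return None

# Hypothetical listing; real `swcli eval list` columns may differ.
sample = """\
gnyckyvvgqyt  success  mnist  2022-11-01
gfqtkmrumizt  failed   mnist  2022-10-30
gm4dmyrrmizt  success  cifar  2022-10-29
"""
print(pick_latest_success(sample))  # first field of the newest successful mnist row
```

Because `swcli eval list` prints newest jobs first, taking the first matching row is what gives "latest"; a substring match on `success` is a loose proxy for a real status column.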
diff --git a/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/on-premises.md b/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/on-premises.md index c6e2252d73..7bf6533436 100644 --- a/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/on-premises.md +++ b/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/on-premises.md @@ -18,10 +18,10 @@ Starwhale Cloud 有两种形态,一种是私有化到用户独立集群的On-P ### 1.2 启动Minikube ```bash -minikube start --image-mirror-country=cn +minikube start --image-mirror-country=cn --kubernetes-version=1.25.3 ``` -对于非中国大陆网络环境,可以去掉 `--image-mirror-country=cn` 参数,直接使用 `minikube start` 即可。另外如果本机没有 `kubectl` 命令,可以使用 `minikube kubectl` 代替,也可以采用 `alias kubectl="minikube kubectl --"` 命令,在当前终端中提供 `kubectl` 命令的alias。 +上面命令中使用 `--kubernetes-version=1.25.3` 参数来固定安装版本,该版本是经过测试确保cn mirror中已经存在的,用户也可以使用尝试其他版本。对于非中国大陆网络环境,可以去掉 `--image-mirror-country=cn --kubernetes-version=1.25.3` 参数,直接使用 `minikube start` 命令即可。另外如果本机没有 `kubectl` 命令,可以使用 `minikube kubectl` 代替,也可以采用 `alias kubectl="minikube kubectl --"` 命令,在当前终端中提供 `kubectl` 命令的alias。 ### 1.3 使用Helm安装Starwhale Cloud diff --git a/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/standalone.md b/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/standalone.md index 7741951690..00303cac68 100644 --- a/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/standalone.md +++ b/docs/i18n/zh/docusaurus-plugin-content-docs/current/quickstart/standalone.md @@ -2,10 +2,10 @@ title: Standalone 快速上手 --- -![Core Workflow](../img/standalone-core-workflow.gif) - **本教程也提供Jupyter Notebook版本,可以在[Colab Notebook](https://colab.research.google.com/github/star-whale/starwhale/blob/main/example/notebooks/quickstart-standalone.ipynb)中在线体验。** +![Core Workflow](../img/standalone-core-workflow.gif) + ## 1. 
安装Starwhale CLI Starwhale 有三种类型的Instances:Standalone-单机、On-Premises-私有化集群、Cloud Hosted-SaaS托管服务。Standalone是最简单的模式,可以从Standalone开启你的Starwhale MLOps之旅。Starwhale Standalone 是用Python3编写的,可以通过pip命令安装: @@ -20,7 +20,7 @@ python3 -m pip install starwhale 系统环境要求: -- Python:3.7 ~ 3.10 +- Python:3.7 ~ 3.11 - 操作系统:Linux或macOS 推荐阅读[Standalone 安装建议](../guides/install/standalone.md)。 @@ -32,12 +32,6 @@ git clone https://github.com/star-whale/starwhale.git cd starwhale ``` -如果本机环境中之前没有安装过[git-lfs](https://git-lfs.github.com/)(命令为`git lfs install`),需要手工下载训练好的mnist.pt文件。 - -```bash -wget https://media.githubusercontent.com/media/star-whale/starwhale/main/example/mnist/models/mnist_cnn.pt -O example/mnist/models/mnist_cnn.pt -``` - 我们选用ML/DL领域的HelloWorld程序-MNIST来介绍如何从零开始构建数据集、模型包和运行环境,并最终完成模型评测。接下来的操作都在 `starwhale` 目录中进行。 ## 3. 构建Starwhale Runtime运行环境 @@ -46,9 +40,23 @@ Runtime的示例程序在 `example/runtime/pytorch` 目录中。 - 构建Starwhale Runtime: + :::tip + + 当首次构建Starwhale Runtime时,由于需要创建venv或conda隔离环境,并下载相关的Python依赖,命令执行需要花费一段时间。时间长短取决与所在机器的网络情况和runtime.yaml中Python依赖的数量。建议合理设置机器的 `~/.pip/pip.conf` 文件,填写缓存路径和适合当前网络环境的pypi mirror地址。 + + 处于中国大陆网络环境中的用户,可以参考如下配置: + + ```conf + [global] + cache-dir = ~/.cache/pip + index-url = https://mirrors.aliyun.com/pypi/simple/ + extra-index-url = https://pypi.doubanio.com/simple + ``` + + ::: + ```bash - cd example/runtime/pytorch - swcli runtime build . + swcli runtime build example/runtime/pytorch ``` - 检查构建好的Starwhale Runtime: @@ -58,14 +66,30 @@ Runtime的示例程序在 `example/runtime/pytorch` 目录中。 swcli runtime info pytorch/version/latest ``` +- 预先restore Starwhale Runtime(可选): + + ```bash + swcli runtime restore pytorch/version/latest + ``` + ## 4. 构建Starwhale Model模型包 Model的示例程序在 `example/mnist` 目录中。 -- 构建Starwhale Model: +- 下载预先训练好的模型文件: + + ```bash + cd example/mnist + CN=1 make download-model + # 对于非中国大陆网络环境用户,可以去掉make命令前的 `CN=1` 环境变量 + # make download-model + cd - + ``` + +- 使用Starwhale Runtime来构建Starwhale Model: ```bash - swcli model build . 
+ swcli model build example/mnist --runtime pytorch/version/latest ``` - 检查构建好的Starwhale Runtime: @@ -82,18 +106,17 @@ Dataset的示例程序在 `example/mnist` 目录中。 - 下载MNIST原始数据: ```bash - mkdir -p data && cd data - wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz - wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz - gzip -d *.gz - cd .. - ls -lah data/* + cd example/mnist + CN=1 make download-data + # 对于非中国大陆网络环境用户,可以去掉make命令前的 `CN=1` 环境变量 + # make download-data + cd - ``` - 构建Starwhale Dataset: ```bash - swcli dataset build . + swcli dataset build example/mnist --runtime pytorch/version/latest ``` - 检查构建好的Starwhale Dataset: @@ -103,23 +126,25 @@ Dataset的示例程序在 `example/mnist` 目录中。 swcli dataset info mnist/version/latest ``` +- 查看数据集的前几条数据: + + ```bash + swcli dataset head mnist/version/latest + ``` + ## 6. 运行模型评测任务 - 运行模型评测任务: ```bash - swcli -vvv eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest + swcli eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest ``` - 查看模型评测结果: ```bash swcli eval list - swcli eval info ${version} + swcli eval info $(swcli eval list | grep mnist | grep success | awk '{print $1}' | head -n 1) ``` -:::tip -Runtime首次使用的时候会创建隔离的python环境并安装依赖,可能会用时较长,同时建议合理设置 ~/.pip/pip.conf 文件,选用下载速度快的pypi mirror地址。 -::: - 👏 恭喜,目前已经完成了Starwhale Standalone的基本操作任务。 diff --git a/example/notebooks/quickstart-standalone.ipynb b/example/notebooks/quickstart-standalone.ipynb index 7987ea3c4c..4eed42f63b 100644 --- a/example/notebooks/quickstart-standalone.ipynb +++ b/example/notebooks/quickstart-standalone.ipynb @@ -33,7 +33,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Getting Starwhale version and installing shell completion(optional)." + "Getting Starwhale version and help info." 
] }, { @@ -45,7 +45,7 @@ "%%bash\n", "\n", "swcli --version\n", - "swcli completion install" + "swcli --help " ] }, { @@ -191,7 +191,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "- Check your local Starwhale Model." + "- Check your local Starwhale Model:" ] }, { @@ -307,7 +307,7 @@ "source": [ "%%bash\n", "\n", - "swcli -vvv eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest" + "swcli eval run --model mnist/version/latest --dataset mnist/version/latest --runtime pytorch/version/latest" ] }, { @@ -347,7 +347,7 @@ }, "language_info": { "name": "python", - "version": "3.9.13 (main, Oct 13 2022, 21:15:33) \n[GCC 11.2.0]" + "version": "3.9.13" }, "orig_nbformat": 4, "vscode": {
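Both the removed `DatasetProcessExecutor` and the new `iter_swds_bin_item` builder in this diff parse the MNIST IDX binaries with `struct.unpack`. The sketch below isolates that header-and-record parsing and runs it against synthetic in-memory bytes instead of the real `t10k-*` files; the magic numbers 2051/2049 follow the IDX convention for image and label files:

```python
import io
import struct

def parse_idx_pair(image_stream, label_stream):
    """Parse IDX image/label headers and yield (pixel_bytes, label) pairs,
    using the same struct formats as the example dataset builder."""
    # Image header: magic, image count, rows, cols (big-endian uint32 each).
    _, data_number, height, width = struct.unpack(">IIII", image_stream.read(16))
    # Label header: magic, label count.
    _, label_number = struct.unpack(">II", label_stream.read(8))
    image_size = height * width
    for _ in range(min(data_number, label_number)):
        pixels = image_stream.read(image_size)
        (label,) = struct.unpack(">B", label_stream.read(1))
        yield pixels, label

# Synthetic data: two 2x2 "images" with labels 7 and 3.
images = struct.pack(">IIII", 2051, 2, 2, 2) + bytes(range(4)) + bytes(range(4, 8))
labels = struct.pack(">II", 2049, 2) + bytes([7, 3])
pairs = list(parse_idx_pair(io.BytesIO(images), io.BytesIO(labels)))
print(pairs[0][1], pairs[1][1])  # 7 3
```

In the real builder, each `pixels` buffer is wrapped in a `GrayscaleImage` and yielded together with its label; this sketch keeps only the binary-format logic so it can run without the Starwhale SDK or the downloaded data files.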