diff --git a/content/cn/docs/quickstart/hugegraph-hubble.md b/content/cn/docs/quickstart/hugegraph-hubble.md
index 1aad48008..eb76cd3ae 100644
--- a/content/cn/docs/quickstart/hugegraph-hubble.md
+++ b/content/cn/docs/quickstart/hugegraph-hubble.md
@@ -32,7 +32,115 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
对于需要遍历全图的Gremlin任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。
-### 2 平台使用流程
+### 2 Deployment
+
+There are three ways to deploy `hugegraph-hubble`:
+- Download the toolchain binary package
+- Build from source
+- Use Docker
+
+#### 2.1 Download the toolchain binary package
+
+The `hubble` module is part of the `toolchain` project. First, download the `toolchain` tarball:
+
+```bash
+wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-toolchain-incubating-{version}.tar.gz
+tar -xvf apache-hugegraph-toolchain-incubating-{version}.tar.gz
+cd apache-hugegraph-toolchain-incubating-{version}/apache-hugegraph-hubble-incubating-{version}
+```
+
+Run `hubble`:
+
+```bash
+bin/start-hubble.sh
+```
+
+You should then see output like:
+
+```shell
+starting HugeGraphHubble ..............timed out with http status 502
+2023-08-30 20:38:34 [main] [INFO ] o.a.h.HugeGraphHubble [] - Starting HugeGraphHubble v1.0.0 on cpu05 with PID xxx (~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0/lib/hubble-be-1.0.0.jar started by $USER in ~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0)
+...
+2023-08-30 20:38:38 [main] [INFO ] c.z.h.HikariDataSource [] - hugegraph-hubble-HikariCP - Start completed.
+2023-08-30 20:38:41 [main] [INFO ] o.a.c.h.Http11NioProtocol [] - Starting ProtocolHandler ["http-nio-0.0.0.0-8088"]
+2023-08-30 20:38:41 [main] [INFO ] o.a.h.HugeGraphHubble [] - Started HugeGraphHubble in 7.379 seconds (JVM running for 8.499)
+```
+
+Then open `ip:8088` in a browser to see the `hubble` page. Stop the service with `bin/stop-hubble.sh`.
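
The start script polls the port itself, but when scripting a deployment you may want your own readiness check. A minimal sketch (the `retry` helper and the URL are our own illustrations, not part of hubble):

```shell
# Retry a command until it succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Example: wait up to 30s for the hubble UI to answer (requires curl):
# retry 30 curl -sf http://localhost:8088 > /dev/null && echo "hubble is up"
```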
+
+#### 2.2 Build from source
+
+**Note:** Building hubble requires `Node.js 16.x` and `yarn` to be installed in the local environment:
+
+```bash
+apt install curl build-essential
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+source ~/.bashrc
+nvm install 16
+```
+
+Then confirm that the installed version is `16.x` (note that a Node version that is too new may cause conflicts):
+
+```bash
+node -v
+```
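
To script this check, one can parse the major version out of the `node -v` output. A sketch (the `major_of` helper is our own, and the sample value stands in for a live `node -v` call):

```shell
# Extract the major version from a "vX.Y.Z" string as printed by `node -v`.
major_of() {
  v="${1#v}"                 # drop the leading "v"
  printf '%s\n' "${v%%.*}"   # keep everything before the first dot
}

ver="v16.20.2"               # in practice: ver="$(node -v)"
if [ "$(major_of "$ver")" -eq 16 ]; then
  echo "Node $ver looks fine for building hubble"
else
  echo "Node $ver may conflict; install 16.x via 'nvm install 16'" >&2
fi
```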
+
+Install `yarn` with the following command:
+
+```bash
+npm install -g yarn
+```
+
+Download the toolchain source code:
+
+```shell
+git clone https://github.com/apache/hugegraph-toolchain.git
+```
+
+Build `hubble`. It depends on the loader and client modules, so build those dependencies first (this can be skipped for subsequent builds):
+
+```shell
+cd hugegraph-toolchain
+sudo pip install -r hugegraph-hubble/hubble-dist/assembly/travis/requirements.txt
+mvn install -pl hugegraph-client,hugegraph-loader -am -Dmaven.javadoc.skip=true -DskipTests -ntp
+cd hugegraph-hubble
+mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
+cd apache-hugegraph-hubble-incubating*
+```
+
+Start `hubble`:
+
+```bash
+bin/start-hubble.sh -d
+```
+
+#### 2.3 Use Docker
+
+> **Special note**: If hubble is started with Docker and hubble and the server are on the same host, do not set the graph's hostname to `localhost/127.0.0.1` in the hubble page later; this would point to the inside of the hubble container rather than the host, making the server unreachable.
+
+You can quickly start [hubble](https://hub.docker.com/r/hugegraph/hubble) with `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble`.
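
The note above matters when the server runs directly on the host rather than in a container. One workaround (a sketch, assuming Docker 20.10+; the service layout is illustrative) is to map the special name `host.docker.internal` to the host gateway and enter that as the graph hostname in the hubble UI:

```yaml
version: '3'
services:
  hubble:
    image: hugegraph/hubble
    container_name: hubble
    ports:
      - 8088:8088
    extra_hosts:
      # resolves host.docker.internal to the host's gateway IP (Docker 20.10+)
      - host.docker.internal:host-gateway
```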
+
+Alternatively, start hubble with docker-compose. If hubble and the graph server are on the same Docker network, hubble can access the server by its `container_name` instead of the host IP.
+
+Run `docker-compose up -d` with the following `docker-compose.yml`:
+
+```yaml
+version: '3'
+services:
+ graph_hubble:
+ image: hugegraph/hugegraph
+ container_name: graph
+ ports:
+ - 18080:8080
+
+ hubble:
+ image: hugegraph/hubble
+ container_name: hubble
+ ports:
+ - 8088:8088
+```
+
+### 3 Platform Workflow
平台的模块使用流程如下:
@@ -41,9 +149,9 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
-### 3 平台使用说明
-#### 3.1 图管理
-##### 3.1.1 图创建
+### 4 Platform Instructions
+#### 4.1 Graph Management
+##### 4.1.1 Create graph
图管理模块下,点击【创建图】,通过填写图ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。
-Create graph by filling in the content as follows::
+Create graph by filling in the content as follows:
-##### 3.1.2 Graph Access
+##### 4.1.2 Graph Access
Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
@@ -66,7 +175,7 @@ Realize the information access of the graph space. After entering, you can perfo
-##### 3.1.3 Graph management
+##### 4.1.3 Graph management
1. Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
2. Search range: You can search for the graph name and ID.
@@ -75,8 +184,8 @@ Realize the information access of the graph space. After entering, you can perfo
-#### 3.2 Metadata Modeling (list + graph mode)
-##### 3.2.1 Module entry
+#### 4.2 Metadata Modeling (list + graph mode)
+##### 4.2.1 Module entry
Left navigation:
@@ -84,8 +193,8 @@ Left navigation:
-##### 3.2.2 Property type
-###### 3.2.2.1 Create type
+##### 4.2.2 Property type
+###### 4.2.2.1 Create type
1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
2. Created attributes can be used as attributes of vertex type and edge type.
@@ -103,7 +212,7 @@ Graph mode:
-###### 3.2.2.2 Reuse
+###### 4.2.2.2 Reuse
1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.
@@ -121,11 +230,11 @@ Check reuse items:
-###### 3.2.2.3 Management
+###### 4.2.2.3 Management
1. You can delete a single item or delete it in batches in the attribute list.
-##### 3.2.3 Vertex type
-###### 3.2.3.1 Create type
+##### 4.2.3 Vertex type
+###### 4.2.3.1 Create type
-1. Fill in or select the vertex type name, ID strategy, association attribute, primary key attribute, vertex style, content displayed below the vertex in the query result, and index information: including whether to create a type index, and the specific content of the attribute index, complete the vertex Type creation.
+1. Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and index information (whether to create a type index, and the specific content of the attribute index) to complete the vertex type creation.
List mode:
@@ -141,11 +250,11 @@ Graph mode:
-###### 3.2.3.2 Reuse
+###### 4.2.3.2 Reuse
1. The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together.
-2. The reuse method is similar to the property reuse, see 3.2.2.2.
+2. The reuse method is similar to the property reuse, see 4.2.2.2.
-###### 3.2.3.3 Administration
+###### 4.2.3.3 Administration
1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.
2. You can delete a single item or delete it in batches.
@@ -155,8 +264,8 @@ Graph mode:
-##### 3.2.4 Edge Types
-###### 3.2.4.1 Create
+##### 4.2.4 Edge Types
+###### 4.2.4.1 Create
-1. Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, content displayed below the edge in the query result, and index information: including whether to create a type index, and attribute index The specific content, complete the creation of the edge type.
+1. Fill in or select the edge type name, start vertex type, end vertex type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and index information (whether to create a type index, and the specific content of the attribute index) to complete the edge type creation.
List mode:
@@ -173,19 +282,19 @@ Graph mode:
-###### 3.2.4.2 Reuse
+###### 4.2.4.2 Reuse
1. The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
-2. The reuse method is similar to the property reuse, see 3.2.2.2.
+2. The reuse method is similar to the property reuse, see 4.2.2.2.
-###### 3.2.4.3 Administration
+###### 4.2.4.3 Administration
1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
2. You can delete a single item or delete it in batches.
-##### 3.2.5 Index Types
+##### 4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
-#### 3.3 Data Import
+#### 4.3 Data Import
The usage process of data import is as follows:
@@ -193,14 +302,14 @@ The usage process of data import is as follows:
-##### 3.3.1 Module entrance
+##### 4.3.1 Module entrance
Left navigation:
-##### 3.3.2 Create task
+##### 4.3.2 Create task
1. Fill in the task name and remarks (optional) to create an import task.
2. Multiple import tasks can be created and imported in parallel.
@@ -209,7 +318,7 @@ Left navigation:
-##### 3.3.3 Uploading files
+##### 4.3.3 Uploading files
1. Upload the file that needs to be composed. The currently supported format is CSV, which will be updated continuously in the future.
2. Multiple files can be uploaded at the same time.
@@ -218,7 +327,7 @@ Left navigation:
-##### 3.3.4 Setting up data mapping
+##### 4.3.4 Setting up data mapping
1. Set up data mapping for uploaded files, including file settings and type settings
2. File settings: Check or fill in whether to include the header, separator, encoding format and other settings of the file itself, all set the default values, no need to fill in manually
3. Type setting:
@@ -245,7 +354,7 @@ Mapping list:
-##### 3.3.5 Import data
+##### 4.3.5 Import data
-Before importing, you need to fill in the import setting parameters. After filling in, you can start importing data into the gallery.
+Before importing, you need to fill in the import setting parameters. After that, you can start importing data into the graph.
1. Import settings
- The import setting parameter items are as shown in the figure below, all set the default value, no need to fill in manually
@@ -265,22 +374,22 @@ Before importing, you need to fill in the import setting parameters. After filli
-#### 3.4 Data Analysis
-##### 3.4.1 Module entry
+#### 4.4 Data Analysis
+##### 4.4.1 Module entry
Left navigation:
-##### 3.4.2 Multi-image switching
+##### 4.4.2 Multi-graph switching
By switching the entrance on the left, flexibly switch the operation space of multiple graphs
-##### 3.4.3 Graph Analysis and Processing
+##### 4.4.3 Graph Analysis and Processing
-HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, and create and delete vertices/edges. , vertex/edge attribute modification, etc.
+HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general-purpose graph database query language. By entering Gremlin statements and clicking execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
@@ -305,11 +414,11 @@ Support zoom, center, full screen, export and other operations.
-##### 3.4.4 Data Details
+##### 4.4.4 Data Details
Click the vertex/edge entity to view the data details of the vertex/edge, including: vertex/edge type, vertex ID, attribute and corresponding value, expand the information display dimension of the graph, and improve the usability.
-##### 3.4.5 Multidimensional Path Query of Graph Results
+##### 4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc.
@@ -324,8 +433,8 @@ Double-clicking a vertex also displays the vertex associated with the selected p
-##### 3.4.6 Add vertex/edge
-###### 3.4.6.1 Added vertex
+##### 4.4.6 Add vertex/edge
+###### 4.4.6.1 Add vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
1. Click on the graph area panel, the Add Vertex entry appears
2. Click the first icon in the action bar in the upper right corner
@@ -346,11 +455,11 @@ Add the vertex content as follows:
-###### 3.4.6.2 Add edge
+###### 4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
-##### 3.4.7 Execute the query of records and favorites
+##### 4.4.7 Execute the query of records and favorites
-1. Record each query record at the bottom of the graph area, including: query time, execution type, content, status, time-consuming, as well as [collection] and [load] operations, to achieve a comprehensive record of graph execution, with traces to follow, and Can quickly load and reuse execution content
+1. Each query is recorded at the bottom of the graph area, including query time, execution type, content, status, and elapsed time, together with [favorite] and [load] operations. This gives a complete, traceable record of graph executions and allows execution content to be quickly loaded and reused.
2. Provides the function of collecting sentences, which can be used to collect frequently used sentences, which is convenient for fast calling of high-frequency sentences.
@@ -359,15 +468,15 @@ Right-click a vertex in the graph result to add the outgoing or incoming edge of
-#### 3.5 Task Management
-##### 3.5.1 Module entry
+#### 4.5 Task Management
+##### 4.5.1 Module entry
Left navigation:
-##### 3.5.2 Task Management
+##### 4.5.2 Task Management
1. Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
- gremlin: Gremlin tasks
- algorithm: OLAP algorithm task
@@ -383,7 +492,7 @@ Left navigation:
-##### 3.5.3 Gremlin asynchronous tasks
+##### 4.5.3 Gremlin asynchronous tasks
1. Create a task
- The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to the Gremlin task, after clicking execute, an asynchronous task will be created in the asynchronous task center;
@@ -408,10 +517,10 @@ Click to view the entry to jump to the task management list, as follows:
- The results are displayed in the form of json
-##### 3.5.4 OLAP algorithm tasks
+##### 4.5.4 OLAP algorithm tasks
There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
-##### 3.5.5 Delete metadata, rebuild index
+##### 4.5.5 Delete metadata, rebuild index
1. Create a task
- In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created
diff --git a/content/en/docs/quickstart/hugegraph-server.md b/content/en/docs/quickstart/hugegraph-server.md
index b53e8f4f7..a34566812 100644
--- a/content/en/docs/quickstart/hugegraph-server.md
+++ b/content/en/docs/quickstart/hugegraph-server.md
@@ -480,6 +480,13 @@ Please refer to [Setup Server in IDEA](/docs/contribution-guidelines/hugegraph-s
### 9 Create Sample Graph on Server Startup
+There are three ways to create a sample graph on server startup:
+- Method 1: Modify the configuration file directly.
+- Method 2: Use command-line arguments in the startup script.
+- Method 3: Use Docker or Docker Compose to add environment variables.
+
+#### 9.1 Modify the configuration file directly
+
Modify `conf/gremlin-server.yaml` and change `empty-sample.groovy` to `example.groovy`:
```yaml
@@ -523,4 +530,62 @@ And when using the RESTful API to request `HugeGraphServer`, you receive the fol
indicating the successful creation of the sample graph.
-> The process of creating sample graph on server startup is similar when using IntelliJ IDEA and will not be described further.
\ No newline at end of file
+> The process of creating sample graph on server startup is similar when using IntelliJ IDEA and will not be described further.
+
+
+#### 9.2 Specify command-line arguments in the startup script
+
+Pass the `-p true` argument (`p` for `preload`) to the startup script to create a sample graph:
+
+```bash
+bin/start-hugegraph.sh -p true
+Starting HugeGraphServer in daemon mode...
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)......OK
+```
+
+Then request `HugeGraphServer` through the RESTful API; you should get the following result:
+
+```javascript
+> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
+
+{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
+```
+
+This indicates the successful creation of the sample graph.
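
As a quick sanity check, the sample graph contains 6 vertices. A rough sketch that counts `"id"` fields in a saved response (dependency-free but crude; in practice pipe the `curl` output in, or use `jq` if available):

```shell
# Count vertices in a /graph/vertices response by counting top-level "id" keys.
response='{"vertices":[{"id":"2:lop"},{"id":"1:josh"},{"id":"1:marko"},{"id":"1:peter"},{"id":"1:vadas"},{"id":"2:ripple"}]}'
count=$(printf '%s' "$response" | grep -o '"id"' | wc -l | tr -d ' ')
echo "vertex count: $count"   # expect 6 for the sample graph
```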
+
+
+#### 9.3 Use Docker or Docker Compose to set environment variables
+
+Set the environment variable `PRELOAD=true` when starting the container so that the sample graph is loaded during startup.
+
+1. Use `docker run`
+
+ Use `docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest`
+
+2. Use `docker-compose`
+
+    Create `docker-compose.yml` as follows:
+
+ ```yaml
+ version: '3'
+ services:
+ graph:
+ image: hugegraph/hugegraph:latest
+ container_name: graph
+ environment:
+ - PRELOAD=true
+ ports:
+ - 18080:8080
+ ```
+
+    Use `docker-compose up -d` to start the container.
+
+Again, request `HugeGraphServer` through the RESTful API; you should get the following result:
+
+```javascript
+> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
+
+{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
+```
+
+This indicates the successful creation of the sample graph.