diff --git a/content/cn/docs/quickstart/hugegraph-hubble.md b/content/cn/docs/quickstart/hugegraph-hubble.md
index 1aad48008..eb76cd3ae 100644
--- a/content/cn/docs/quickstart/hugegraph-hubble.md
+++ b/content/cn/docs/quickstart/hugegraph-hubble.md
@@ -32,7 +32,115 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
 
 对于需要遍历全图的Gremlin任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。
 
-### 2 平台使用流程
+### 2 部署
+
+有三种方式可以部署 `hugegraph-hubble`:
+- 下载 toolchain 二进制包
+- 源码编译
+- 使用 Docker
+
+#### 2.1 下载 toolchain 二进制包
+
+`hubble` 项目在 `toolchain` 项目中,首先下载 `toolchain` 的 tar 包:
+
+```bash
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-{version}.tar.gz
+tar -xvf apache-hugegraph-toolchain-incubating-{version}.tar.gz
+cd apache-hugegraph-toolchain-incubating-{version}/apache-hugegraph-hubble-incubating-{version}
+```
+
+运行 `hubble`:
+
+```bash
+bin/start-hubble.sh
+```
+
+随后我们可以看到:
+
+```shell
+starting HugeGraphHubble ..............timed out with http status 502
+2023-08-30 20:38:34 [main] [INFO ] o.a.h.HugeGraphHubble [] - Starting HugeGraphHubble v1.0.0 on cpu05 with PID xxx (~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0/lib/hubble-be-1.0.0.jar started by $USER in ~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0)
+...
+2023-08-30 20:38:38 [main] [INFO ] c.z.h.HikariDataSource [] - hugegraph-hubble-HikariCP - Start completed.
+2023-08-30 20:38:41 [main] [INFO ] o.a.c.h.Http11NioProtocol [] - Starting ProtocolHandler ["http-nio-0.0.0.0-8088"]
+2023-08-30 20:38:41 [main] [INFO ] o.a.h.HugeGraphHubble [] - Started HugeGraphHubble in 7.379 seconds (JVM running for 8.499)
+```
+
+然后使用浏览器访问 `ip:8088` 即可看到 `hubble` 页面,通过 `bin/stop-hubble.sh` 可以停止服务。
+
+#### 2.2 源码编译
+
+**注意:** 编译 hubble 需要用户本地环境已安装 `Node.js V16.x` 与 `yarn`。
+
+```bash
+apt install curl build-essential
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+source ~/.bashrc
+nvm install 16
+```
+
+然后确认安装的版本是否为 `16.x`(请注意过高的 Node 版本会产生冲突):
+
+```bash
+node -v
+```
+
+使用下列命令安装 `yarn`:
+
+```bash
+npm install -g yarn
+```
+
+下载 toolchain 源码包:
+
+```shell
+git clone https://github.com/apache/hugegraph-toolchain.git
+```
+
+编译 `hubble`,它依赖 loader 和 client,编译时需提前构建这些依赖(后续构建可跳过此步):
+
+```shell
+cd incubator-hugegraph-toolchain
+sudo pip install -r hugegraph-hubble/hubble-dist/assembly/travis/requirements.txt
+mvn install -pl hugegraph-client,hugegraph-loader -am -Dmaven.javadoc.skip=true -DskipTests -ntp
+cd hugegraph-hubble
+mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
+cd apache-hugegraph-hubble-incubating*
+```
+
+启动 `hubble`:
+
+```bash
+bin/start-hubble.sh -d
+```
+
+#### 2.3 使用 Docker
+
+> **特别注意**: 如果使用 docker 启动 hubble,且 hubble 和 server 位于同一宿主机,在后续 hubble 页面中设置 graph 的 hostname 的时候请不要直接设置 `localhost/127.0.0.1`,这将指向 hubble 容器内部而非宿主机,导致无法连接到 server。
+
+我们可以使用 `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble` 快速启动 [hubble](https://hub.docker.com/r/hugegraph/hubble)。
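+
+如果 hubble 与 server 分别以容器方式运行在同一宿主机上,也可以让二者加入同一个自定义 docker 网络,这样 hubble 就能直接通过容器名访问 server。下面是一个参考示例(其中网络名 `hugegraph-net` 仅为假设的示例名称,可自行替换):
+
+```bash
+# 创建自定义网络(名称仅为示例)
+docker network create hugegraph-net
+# 将 server 与 hubble 启动在同一网络中
+docker run -itd --name=graph --network hugegraph-net -p 18080:8080 hugegraph/hugegraph
+docker run -itd --name=hubble --network hugegraph-net -p 8088:8088 hugegraph/hubble
+```
+
+此时在 hubble 页面中创建图时,主机名可直接填写容器名 `graph`,端口填写容器内端口 `8080`。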
+
+或者使用 docker-compose 启动 hubble。另外,如果 hubble 和 graph 在同一个 docker 网络下,可以使用 graph 的 container_name 进行访问,而不需要宿主机的 IP。
+
+使用 `docker-compose up -d` 启动,`docker-compose.yml` 示例如下:
+
+```yaml
+version: '3'
+services:
+  graph_hubble:
+    image: hugegraph/hugegraph
+    container_name: graph
+    ports:
+      - 18080:8080
+
+  hubble:
+    image: hugegraph/hubble
+    container_name: hubble
+    ports:
+      - 8088:8088
+```
+
+### 3 平台使用流程
 
 平台的模块使用流程如下:
 
@@ -41,9 +149,9 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
 
-### 3 平台使用说明
-#### 3.1 图管理
-##### 3.1.1 图创建
+### 4 平台使用说明
+#### 4.1 图管理
+##### 4.1.1 图创建
 
 图管理模块下,点击【创建图】,通过填写图ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。<br/>
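+
+在填写主机名与端口前,可以先确认 HugeGraphServer 是否可达。下面是一个简单的检查示例(假设 server 地址为 `192.168.1.10:8080`,该地址仅为示意;接口路径请以所用版本的 REST API 文档为准):
+
+```bash
+# 能返回版本信息即说明 server 可达
+curl http://192.168.1.10:8080/apis/version
+```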
@@ -58,7 +166,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
-##### 3.1.2 图访问 +##### 4.1.2 图访问 实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。
@@ -66,7 +174,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
-##### 3.1.3 图管理 +##### 4.1.3 图管理 1. 用户通过对图的概览、搜索以及单图的信息编辑与删除,实现图的统一管理。 2. 搜索范围:可对图名称和ID进行搜索。 @@ -75,8 +183,8 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -#### 3.2 元数据建模(列表+图模式) -##### 3.2.1 模块入口 +#### 4.2 元数据建模(列表+图模式) +##### 4.2.1 模块入口 左侧导航处:
@@ -84,8 +192,8 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
-##### 3.2.2 属性类型 -###### 3.2.2.1 创建 +##### 4.2.2 属性类型 +###### 4.2.2.1 创建 1. 填写或选择属性名称、数据类型、基数,完成属性的创建。 2. 创建的属性可作为顶点类型和边类型的属性。 @@ -103,7 +211,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -###### 3.2.2.2 复用 +###### 4.2.2.2 复用 1. 平台提供【复用】功能,可直接复用其他图的元数据。 2. 选择需要复用的图ID,继续选择需要复用的属性,之后平台会进行是否冲突的校验,通过后,可实现元数据的复用。 @@ -121,11 +229,11 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -###### 3.2.2.3 管理 +###### 4.2.2.3 管理 1. 在属性列表中可进行单条删除或批量删除操作。 -##### 3.2.3 顶点类型 -###### 3.2.3.1 创建 +##### 4.2.3 顶点类型 +###### 4.2.3.1 创建 1. 填写或选择顶点类型名称、ID策略、关联属性、主键属性,顶点样式、查询结果中顶点下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成顶点类型的创建。 列表模式: @@ -142,11 +250,11 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -###### 3.2.3.2 复用 +###### 4.2.3.2 复用 1. 顶点类型的复用,会将此类型关联的属性和属性索引一并复用。 2. 复用功能使用方法类似属性的复用,见3.2.2.2。 -###### 3.2.3.3 管理 +###### 4.2.3.3 管理 1. 可进行编辑操作,顶点样式、关联类型、顶点展示内容、属性索引可编辑,其余不可编辑。 @@ -157,8 +265,8 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -##### 3.2.4 边类型 -###### 3.2.4.1 创建 +##### 4.2.4 边类型 +###### 4.2.4.1 创建 1. 填写或选择边类型名称、起点类型、终点类型、关联属性、是否允许多次连接、边样式、查询结果中边下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成边类型的创建。 列表模式: @@ -175,19 +283,19 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -###### 3.2.4.2 复用 +###### 4.2.4.2 复用 1. 边类型的复用,会将此类型的起点类型、终点类型、关联的属性和属性索引一并复用。 2. 复用功能使用方法类似属性的复用,见3.2.2.2。 -###### 3.2.4.3 管理 +###### 4.2.4.3 管理 1. 可进行编辑操作,边样式、关联属性、边展示内容、属性索引可编辑,其余不可编辑,同顶点类型。 2. 可进行单条删除或批量删除操作。 -##### 3.2.5 索引类型 +##### 4.2.5 索引类型 展示顶点类型和边类型的顶点索引和边索引。 -#### 3.3 数据导入 +#### 4.3 数据导入 数据导入的使用流程如下:
@@ -195,14 +303,14 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统,
-##### 3.3.1 模块入口 +##### 4.3.1 模块入口 左侧导航处:
image
-##### 3.3.2 创建任务 +##### 4.3.2 创建任务 1. 填写任务名称和备注(非必填),可以创建导入任务。 2. 可创建多个导入任务,并行导入。 @@ -211,7 +319,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -##### 3.3.3 上传文件 +##### 4.3.3 上传文件 1. 上传需要构图的文件,目前支持的格式为CSV,后续会不断更新。 2. 可同时上传多个文件。 @@ -220,7 +328,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -##### 3.3.4 设置数据映射 +##### 4.3.4 设置数据映射 1. 对上传的文件分别设置数据映射,包括文件设置和类型设置 2. 文件设置:勾选或填写是否包含表头、分隔符、编码格式等文件本身的设置内容,均设置默认值,无需手动填写 3. 类型设置: @@ -247,7 +355,7 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -##### 3.3.5 导入数据 +##### 4.3.5 导入数据 导入前需要填写导入设置参数,填写完成后,可开始向图库中导入数据 1. 导入设置 - 导入设置参数项如下图所示,均设置默认值,无需手动填写 @@ -267,22 +375,22 @@ HugeGraph是一款面向分析型,支持批量操作的图数据库系统, -#### 3.4 数据分析 -##### 3.4.1 模块入口 +#### 4.4 数据分析 +##### 4.4.1 模块入口 左侧导航处:
image
-##### 3.4.2 多图切换 +##### 4.4.2 多图切换 通过左侧切换入口,灵活切换多图的操作空间
image
-##### 3.4.3 图分析与处理 +##### 4.4.3 图分析与处理 HugeGraph支持Apache TinkerPop3的图遍历查询语言Gremlin,Gremlin是一种通用的图数据库查询语言,通过输入Gremlin语句,点击执行,即可执行图数据的查询分析操作,并可实现顶点/边的创建及删除、顶点/边的属性修改等。 Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方式,分别为:【图模式】、【表格模式】、【Json模式】。 @@ -308,11 +416,11 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 -##### 3.4.4 数据详情 +##### 4.4.4 数据详情 点击顶点/边实体,可查看顶点/边的数据详情,包括:顶点/边类型,顶点ID,属性及对应值,拓展图的信息展示维度,提高易用性。 -##### 3.4.5 图结果的多维路径查询 +##### 4.4.5 图结果的多维路径查询 除了全局的查询外,可针对查询结果中的顶点进行深度定制化查询以及隐藏操作,实现图结果的定制化挖掘。 右击顶点,出现顶点的菜单入口,可进行展示、查询、隐藏等操作。 @@ -327,8 +435,8 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 -##### 3.4.6 新增顶点/边 -###### 3.4.6.1 新增顶点 +##### 4.4.6 新增顶点/边 +###### 4.4.6.1 新增顶点 在图区可通过两个入口,动态新增顶点,如下: 1. 点击图区面板,出现添加顶点入口 2. 点击右上角的操作栏中的首个图标 @@ -349,11 +457,11 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 -###### 3.4.6.2 新增边 +###### 4.4.6.2 新增边 右击图结果中的顶点,可增加该点的出边或者入边。 -##### 3.4.7 执行记录与收藏的查询 +##### 4.4.7 执行记录与收藏的查询 1. 图区下方记载每次查询记录,包括:查询时间、执行类型、内容、状态、耗时、以及【收藏】和【加载】操作,实现图执行的全方位记录,有迹可循,并可对执行内容快速加载复用 2. 提供语句的收藏功能,可对常用语句进行收藏操作,方便高频语句快速调用 @@ -362,15 +470,15 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 -#### 3.5 任务管理 -##### 3.5.1 模块入口 +#### 4.5 任务管理 +##### 4.5.1 模块入口 左侧导航处:
image
-##### 3.5.2 任务管理 +##### 4.5.2 任务管理 1. 提供异步任务的统一的管理与结果查看,异步任务包括4类,分别为: - gremlin:Gremlin任务 - algorithm:OLAP算法任务 @@ -386,7 +494,7 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 -##### 3.5.3 Gremlin异步任务 +##### 4.5.3 Gremlin异步任务 1.创建任务 - 数据分析模块,目前支持两种Gremlin操作,Gremlin查询和Gremlin任务;若用户切换到Gremlin任务,点击执行后,在异步任务中心会建立一条异步任务; @@ -411,10 +519,10 @@ Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方 - 结果通过json形式展示 -##### 3.5.4 OLAP算法任务 +##### 4.5.4 OLAP算法任务 Hubble上暂未提供可视化的OLAP算法执行,可调用RESTful API进行OLAP类算法任务,在任务管理中通过ID找到相应任务,查看进度与结果等。 -##### 3.5.5 删除元数据、重建索引 +##### 4.5.5 删除元数据、重建索引 1.创建任务 - 在元数据建模模块中,删除元数据时,可建立删除元数据的异步任务 diff --git a/content/cn/docs/quickstart/hugegraph-server.md b/content/cn/docs/quickstart/hugegraph-server.md index 371e153d8..9d985b726 100644 --- a/content/cn/docs/quickstart/hugegraph-server.md +++ b/content/cn/docs/quickstart/hugegraph-server.md @@ -465,6 +465,13 @@ $bin/stop-hugegraph.sh ### 9 在启动 Server 时创建示例图 +有三种方式可以在启动 Server 时创建示例图 +- 方式一: 直接修改配置文件 +- 方式二: 启动脚本使用命令行参数 +- 方式三: 使用docker或docker-compose添加环境变量 + +#### 9.1 直接修改配置文件 + 修改 `conf/gremlin-server.yaml`,将 `empty-sample.groovy` 修改为 `example.groovy`: ```yaml @@ -518,4 +525,62 @@ schema = graph.schema() 代表创建示例图成功。 -> 使用 IntelliJ IDEA 在启动 Server 时创建示例图的流程类似,不再赘述。 \ No newline at end of file +> 使用 IntelliJ IDEA 在启动 Server 时创建示例图的流程类似,不再赘述。 + + +#### 9.2 启动脚本时指定参数 + +在脚本启动时候携带 `-p true` 参数, 表示preload, 即创建示例图 + +``` +bin/start-hugegraph.sh -p true +Starting HugeGraphServer in daemon mode... +Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)......OK +``` + +并且使用 RESTful API 请求 `HugeGraphServer` 得到如下结果: + +```javascript +> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip + +{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]} +``` + +代表创建示例图成功。 + + +#### 9.3 使用docker启动 + +在docker启动的时候设置环境变量 `PRELOAD=true`, 从而实现启动脚本的时候加载数据。 + +1. 使用`docker run` + + 使用 `docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest` + +2. 
使用 `docker-compose`
+
+   创建 `docker-compose.yml`,具体文件如下:
+
+   ```yaml
+   version: '3'
+   services:
+     graph:
+       image: hugegraph/hugegraph:latest
+       container_name: graph
+       environment:
+         - PRELOAD=true
+       ports:
+         - 18080:8080
+   ```
+
+   使用命令 `docker-compose up -d` 启动容器。
+
+使用 RESTful API 请求 `HugeGraphServer` 得到如下结果:
+
+```javascript
+> curl "http://localhost:18080/graphs/hugegraph/graph/vertices" | gunzip
+
+{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
+```
+
+代表创建示例图成功。
diff --git a/content/en/docs/quickstart/hugegraph-hubble.md b/content/en/docs/quickstart/hugegraph-hubble.md
index dc48a8dde..1b086f75c 100644
--- a/content/en/docs/quickstart/hugegraph-hubble.md
+++ b/content/en/docs/quickstart/hugegraph-hubble.md
@@ -32,7 +32,116 @@ By inputting the graph traversal language Gremlin, high-performance general anal
 
 For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.
 
-### 2 Platform Workflow
+### 2 Deploy
+
+There are three ways to deploy `hugegraph-hubble`:
+- Download the Toolchain binary package
+- Source code compilation
+- Use Docker
+
+#### 2.1 Download the Toolchain binary package
+
+`hubble` is in the `toolchain` project. First, download the `toolchain` binary tarball:
+
+```bash
+wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-{version}.tar.gz
+tar -xvf apache-hugegraph-toolchain-incubating-{version}.tar.gz
+cd apache-hugegraph-toolchain-incubating-{version}/apache-hugegraph-hubble-incubating-{version}
+```
+
+Run `hubble`:
+
+```bash
+bin/start-hubble.sh
+```
+
+Then, we can see:
+
+```shell
+starting HugeGraphHubble ..............timed out with http status 502
+2023-08-30 20:38:34 [main] [INFO ] o.a.h.HugeGraphHubble [] - Starting HugeGraphHubble v1.0.0 on cpu05 with PID xxx (~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0/lib/hubble-be-1.0.0.jar started by $USER in ~/apache-hugegraph-toolchain-incubating-1.0.0/apache-hugegraph-hubble-incubating-1.0.0)
+...
+2023-08-30 20:38:38 [main] [INFO ] c.z.h.HikariDataSource [] - hugegraph-hubble-HikariCP - Start completed.
+2023-08-30 20:38:41 [main] [INFO ] o.a.c.h.Http11NioProtocol [] - Starting ProtocolHandler ["http-nio-0.0.0.0-8088"]
+2023-08-30 20:38:41 [main] [INFO ] o.a.h.HugeGraphHubble [] - Started HugeGraphHubble in 7.379 seconds (JVM running for 8.499)
+```
+
+Then use a web browser to access `ip:8088` to see the `Hubble` page. The service can be stopped with `bin/stop-hubble.sh`.
+
+#### 2.2 Source code compilation
+
+**Note**: Compiling Hubble requires the user's local environment to have Node.js V16.x and yarn installed.
+
+```bash
+apt install curl build-essential
+curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
+source ~/.bashrc
+nvm install 16
+```
+
+Then, verify that the installed Node.js version is 16.x (please note that a higher Node version may cause conflicts).
+
+```bash
+node -v
+```
+
+Install `yarn` with the command below:
+
+```bash
+npm install -g yarn
+```
+
+Download the toolchain source code:
+
+```shell
+git clone https://github.com/apache/hugegraph-toolchain.git
+```
+
+Compile `hubble`. It depends on the loader and client, so these dependencies need to be built first (this step can be skipped on subsequent builds):
+
+```shell
+cd incubator-hugegraph-toolchain
+sudo pip install -r hugegraph-hubble/hubble-dist/assembly/travis/requirements.txt
+mvn install -pl hugegraph-client,hugegraph-loader -am -Dmaven.javadoc.skip=true -DskipTests -ntp
+cd hugegraph-hubble
+mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
+cd apache-hugegraph-hubble-incubating*
+```
+
+Run `hubble`:
+
+```bash
+bin/start-hubble.sh -d
+```
+
+#### 2.3 Use Docker
+
+> **Special Note**: If you start `hubble` with Docker and `hubble` and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it to `localhost/127.0.0.1`. That address would point inside the `hubble` container rather than to the host machine, resulting in a connection failure to the server.
+
+We can use `docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble` to quickly start [hubble](https://hub.docker.com/r/hugegraph/hubble).
+
+Alternatively, you can use Docker Compose to start `hubble`. Additionally, if `hubble` and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine's IP address.
+
+Start it with `docker-compose up -d`; an example `docker-compose.yml` is as follows:
+
+```yaml
+version: '3'
+services:
+  graph_hubble:
+    image: hugegraph/hugegraph
+    container_name: graph
+    ports:
+      - 18080:8080
+
+  hubble:
+    image: hugegraph/hubble
+    container_name: hubble
+    ports:
+      - 8088:8088
+```
+
+
+### 3 Platform Workflow
 
 The module usage process of the platform is as follows:
 
@@ -41,9 +150,9 @@ The module usage process of the platform is as follows:
 
-### 3 Platform Instructions
-#### 3.1 Graph Management
-##### 3.1.1 Graph creation
+### 4 Platform Instructions
+#### 4.1 Graph Management
+##### 4.1.1 Graph creation
 
 Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.<br/>
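+
+Before filling in the host name and port, you can first check that HugeGraphServer is reachable from the machine running `hubble`. A minimal sketch of such a check, assuming the server address is `192.168.1.10:8080` (a placeholder address; the exact API path may vary by server version):
+
+```bash
+# A version response indicates the server is reachable
+curl http://192.168.1.10:8080/apis/version
+```
+
+If `hubble` and the server run in the same Docker network as in the compose example above, the host name can simply be the server's container name (e.g. `graph`) together with the in-container port `8080`.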
@@ -58,7 +167,7 @@ Create graph by filling in the content as follows:: -##### 3.1.2 Graph Access +##### 4.1.2 Graph Access Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
@@ -66,7 +175,7 @@ Realize the information access of the graph space. After entering, you can perfo
-##### 3.1.3 Graph management +##### 4.1.3 Graph management 1. Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs. 2. Search range: You can search for the graph name and ID. @@ -75,8 +184,8 @@ Realize the information access of the graph space. After entering, you can perfo -#### 3.2 Metadata Modeling (list + graph mode) -##### 3.2.1 Module entry +#### 4.2 Metadata Modeling (list + graph mode) +##### 4.2.1 Module entry Left navigation:
@@ -84,8 +193,8 @@ Left navigation:
-##### 3.2.2 Property type -###### 3.2.2.1 Create type +##### 4.2.2 Property type +###### 4.2.2.1 Create type 1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute. 2. Created attributes can be used as attributes of vertex type and edge type. @@ -103,7 +212,7 @@ Graph mode: -###### 3.2.2.2 Reuse +###### 4.2.2.2 Reuse 1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs. 2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused. @@ -121,11 +230,11 @@ Check reuse items: -###### 3.2.2.3 Management +###### 4.2.2.3 Management 1. You can delete a single item or delete it in batches in the attribute list. -##### 3.2.3 Vertex type -###### 3.2.3.1 Create type +##### 4.2.3 Vertex type +###### 4.2.3.1 Create type 1. Fill in or select the vertex type name, ID strategy, association attribute, primary key attribute, vertex style, content displayed below the vertex in the query result, and index information: including whether to create a type index, and the specific content of the attribute index, complete the vertex Type creation. List mode: @@ -141,11 +250,11 @@ Graph mode: image -###### 3.2.3.2 Reuse +###### 4.2.3.2 Reuse 1. The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together. 2. The reuse method is similar to the property reuse, see 3.2.2.2. -###### 3.2.3.3 Administration +###### 4.2.3.3 Administration 1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited. 2. You can delete a single item or delete it in batches. @@ -155,8 +264,8 @@ Graph mode: -##### 3.2.4 Edge Types -###### 3.2.4.1 Create +##### 4.2.4 Edge Types +###### 4.2.4.1 Create 1. Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, content displayed below the edge in the query result, and index information: including whether to create a type index, and attribute index The specific content, complete the creation of the edge type. List mode: @@ -173,19 +282,19 @@ Graph mode: -###### 3.2.4.2 Reuse +###### 4.2.4.2 Reuse 1. The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type. 2. The reuse method is similar to the property reuse, see 3.2.2.2. -###### 3.2.4.3 Administration +###### 4.2.4.3 Administration 1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type. 2. You can delete a single item or delete it in batches. -##### 3.2.5 Index Types +##### 4.2.5 Index Types Displays vertex and edge indices for vertex types and edge types. -#### 3.3 Data Import +#### 4.3 Data Import The usage process of data import is as follows:
@@ -193,14 +302,14 @@ The usage process of data import is as follows:
-##### 3.3.1 Module entrance +##### 4.3.1 Module entrance Left navigation:
image
-##### 3.3.2 Create task +##### 4.3.2 Create task 1. Fill in the task name and remarks (optional) to create an import task. 2. Multiple import tasks can be created and imported in parallel. @@ -209,7 +318,7 @@ Left navigation: -##### 3.3.3 Uploading files +##### 4.3.3 Uploading files 1. Upload the file that needs to be composed. The currently supported format is CSV, which will be updated continuously in the future. 2. Multiple files can be uploaded at the same time. @@ -218,7 +327,7 @@ Left navigation: -##### 3.3.4 Setting up data mapping +##### 4.3.4 Setting up data mapping 1. Set up data mapping for uploaded files, including file settings and type settings 2. File settings: Check or fill in whether to include the header, separator, encoding format and other settings of the file itself, all set the default values, no need to fill in manually 3. Type setting: @@ -245,7 +354,7 @@ Mapping list: -##### 3.3.5 Import data +##### 4.3.5 Import data Before importing, you need to fill in the import setting parameters. After filling in, you can start importing data into the gallery. 1. Import settings - The import setting parameter items are as shown in the figure below, all set the default value, no need to fill in manually @@ -265,22 +374,22 @@ Before importing, you need to fill in the import setting parameters. After filli -#### 3.4 Data Analysis -##### 3.4.1 Module entry +#### 4.4 Data Analysis +##### 4.4.1 Module entry Left navigation:
image
-##### 3.4.2 Multi-image switching +##### 4.4.2 Multi-image switching By switching the entrance on the left, flexibly switch the operation space of multiple graphs
image
-##### 3.4.3 Graph Analysis and Processing +##### 4.4.3 Graph Analysis and Processing HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, and create and delete vertices/edges. , vertex/edge attribute modification, etc. After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode]. @@ -305,11 +414,11 @@ Support zoom, center, full screen, export and other operations. -##### 3.4.4 Data Details +##### 4.4.4 Data Details Click the vertex/edge entity to view the data details of the vertex/edge, including: vertex/edge type, vertex ID, attribute and corresponding value, expand the information display dimension of the graph, and improve the usability. -##### 3.4.5 Multidimensional Path Query of Graph Results +##### 4.4.5 Multidimensional Path Query of Graph Results In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results. Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc. @@ -324,8 +433,8 @@ Double-clicking a vertex also displays the vertex associated with the selected p -##### 3.4.6 Add vertex/edge -###### 3.4.6.1 Added vertex +##### 4.4.6 Add vertex/edge +###### 4.4.6.1 Added vertex In the graph area, two entries can be used to dynamically add vertices, as follows: 1. Click on the graph area panel, the Add Vertex entry appears 2. Click the first icon in the action bar in the upper right corner @@ -346,11 +455,11 @@ Add the vertex content as follows: -###### 3.4.6.2 Add edge +###### 4.4.6.2 Add edge Right-click a vertex in the graph result to add the outgoing or incoming edge of that point. -##### 3.4.7 Execute the query of records and favorites +##### 4.4.7 Execute the query of records and favorites 1. Record each query record at the bottom of the graph area, including: query time, execution type, content, status, time-consuming, as well as [collection] and [load] operations, to achieve a comprehensive record of graph execution, with traces to follow, and Can quickly load and reuse execution content 2. Provides the function of collecting sentences, which can be used to collect frequently used sentences, which is convenient for fast calling of high-frequency sentences. @@ -359,15 +468,15 @@ Right-click a vertex in the graph result to add the outgoing or incoming edge of -#### 3.5 Task Management -##### 3.5.1 Module entry +#### 4.5 Task Management +##### 4.5.1 Module entry Left navigation:
image
-##### 3.5.2 Task Management +##### 4.5.2 Task Management 1. Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely: - gremlin: Gremlin tasks - algorithm: OLAP algorithm task @@ -383,7 +492,7 @@ Left navigation: -##### 3.5.3 Gremlin asynchronous tasks +##### 4.5.3 Gremlin asynchronous tasks 1. Create a task - The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to the Gremlin task, after clicking execute, an asynchronous task will be created in the asynchronous task center; @@ -408,10 +517,10 @@ Click to view the entry to jump to the task management list, as follows: - The results are displayed in the form of json -##### 3.5.4 OLAP algorithm tasks +##### 4.5.4 OLAP algorithm tasks There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results. -##### 3.5.5 Delete metadata, rebuild index +##### 4.5.5 Delete metadata, rebuild index 1. Create a task - In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created diff --git a/content/en/docs/quickstart/hugegraph-server.md b/content/en/docs/quickstart/hugegraph-server.md index b53e8f4f7..a34566812 100644 --- a/content/en/docs/quickstart/hugegraph-server.md +++ b/content/en/docs/quickstart/hugegraph-server.md @@ -480,6 +480,13 @@ Please refer to [Setup Server in IDEA](/docs/contribution-guidelines/hugegraph-s ### 9 Create Sample Graph on Server Startup +There are three ways to create sample graph on server startup +- Method 1: Modify the configuration file directly. +- Method 2: Use command-line arguments in the startup script. +- Method 3: Use Docker or Docker Compose to add environment variables. + +#### 9.1 Modify the configuration file directly. + Modify `conf/gremlin-server.yaml` and change `empty-sample.groovy` to `example.groovy`: ```yaml @@ -523,4 +530,62 @@ And when using the RESTful API to request `HugeGraphServer`, you receive the fol indicating the successful creation of the sample graph. -> The process of creating sample graph on server startup is similar when using IntelliJ IDEA and will not be described further. \ No newline at end of file +> The process of creating sample graph on server startup is similar when using IntelliJ IDEA and will not be described further. + + +#### 9.2 Specify command-line arguments in the startup script. + +Carry the `-p true` arguments when starting the script, which indicates `preload`, to create a sample graph. + +``` +bin/start-hugegraph.sh -p true +Starting HugeGraphServer in daemon mode... 
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)......OK
+```
+
+And use the RESTful API to request `HugeGraphServer` and get the following result:
+
+```javascript
+> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
+
+{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
+```
+
+This indicates the successful creation of the sample graph.
+
+
+#### 9.3 Use Docker or Docker Compose to add environment variables.
+
+Set the environment variable `PRELOAD=true` when starting Docker so that the sample graph is loaded while the startup script runs.
+
+1. Use `docker run`
+
+   Use `docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest`
+
+2. Use `docker-compose`
+
+   Create `docker-compose.yml` as follows:
+
+   ```yaml
+   version: '3'
+   services:
+     graph:
+       image: hugegraph/hugegraph:latest
+       container_name: graph
+       environment:
+         - PRELOAD=true
+       ports:
+         - 18080:8080
+   ```
+
+   Use `docker-compose up -d` to start the container.
+
+And use the RESTful API to request `HugeGraphServer` and get the following result (note that the container maps port 8080 to 18080 on the host):
+
+```javascript
+> curl "http://localhost:18080/graphs/hugegraph/graph/vertices" | gunzip
+
+{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
+```
+
+This indicates the successful creation of the sample graph.
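+
+When starting with Docker, you can also check the container log to confirm that the sample graph was loaded (a quick check, assuming the container is named `graph` as in the examples above):
+
+```bash
+# Follow the server log and look for the example-graph loading output
+docker logs -f graph
+```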