…and supports large-scale distributed graph analytics (OLAP).

Typical HugeGraph application scenarios include deep relationship exploration, correlation analysis, path search, feature extraction, data clustering, community detection, and knowledge graph construction. Applicable business domains include network security, telecom fraud detection, financial risk control, advertising recommendation, social networks, and intelligent robots.

The system's primary use cases are graph data storage, modeling, and analysis for anti-fraud, threat-intelligence, and black-market crackdown workloads; on that basis it has gradually been extended to support more general-purpose graph applications.

Features

HugeGraph supports graph operations in both online and offline environments, batch data import, efficient analysis of complex relationships, and seamless integration with big-data platforms. HugeGraph supports concurrent operation by multiple users: users can submit Gremlin queries and receive results promptly, or call the HugeGraph API from their own programs for graph analysis and queries.

The system has the following characteristics:

The system's features include, but are not limited to:

Modules

Contact Us

QR png

2 - Download HugeGraph

Latest version

The latest HugeGraph: 1.0.0, released on 2023-02-22 (how to build from source).

| components | description | download |
|---|---|---|
| HugeGraph-Server | main program of HugeGraph | 1.0.0 (alternate) |
| HugeGraph-Toolchain | tool collection for data import/export/backup, web visualization UI, etc. | 1.0.0 (alternate) |

Binary Versions mapping

| Version | Release Date | server | toolchain | computer | Release Notes |
|---|---|---|---|---|---|
| 1.0.0 | 2023-02-22 | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | [Binary] [Sign] [SHA512] | Release-Notes |

Source Versions mapping

| Version | Release Date | server | toolchain | computer | common | Release Notes |
|---|---|---|---|---|---|---|
| 1.0.0 | 2023-02-22 | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | [Source] [Sign] [SHA512] | Release-Notes |

旧版本下载地址 (Outdated Versions Mapping)

| server | client | loader | hubble | common | tools |
|---|---|---|---|---|---|
| 0.12.0 | 2.0.1 | 0.12.0 | 1.6.0 | 2.0.1 | 1.6.0 |
| 0.11.2 | 1.9.1 | 0.11.1 | 1.5.0 | 1.8.1 | 1.5.0 |
| 0.10.4 | 1.8.0 | 0.10.1 | 0.10.0 | 1.6.16 | 1.4.0 |
| 0.9.2 | 1.7.0 | 0.9.0 | 0.9.0 | 1.6.0 | 1.3.0 |
| 0.8.0 | 1.6.4 | 0.8.0 | 0.8.0 | 1.5.3 | 1.2.0 |
| 0.7.4 | 1.5.8 | 0.7.0 | 0.7.0 | 1.4.9 | 1.1.0 |
| 0.6.1 | 1.5.6 | 0.6.1 | 0.6.1 | 1.4.3 | 1.0.0 |
| 0.5.6 | 1.5.0 | 0.5.6 | 0.5.0 | 1.4.0 | |
| 0.4.5 | 1.4.7 | 0.2.2 | 0.4.1 | 1.3.12 | |

Note: the latest graph analysis and visualization platform is hubble, which supports server versions 0.10 and later; studio is the analysis and visualization platform for server 0.10.x and earlier, and its features have not been updated since 0.10.

Release Notes (old version)

3 - Quick Start

3.1 - HugeGraph-Server Quick Start

1 HugeGraph-Server Overview

HugeGraph-Server is the core part of the HugeGraph project, containing submodules such as Core, Backend, and API.

The Core module implements the TinkerPop interface; the Backend module manages data storage, with currently supported backends including Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides an HTTP server that turns clients' HTTP requests into calls to Core.

Both spellings HugeGraph-Server and HugeGraphServer appear frequently in the docs (likewise for other components). There is no major difference in meaning; they can be distinguished as follows: HugeGraph-Server refers to the server-side component code, while HugeGraphServer refers to the running service process.

2 Dependencies

2.1 Install Java 11 (JDK 11)

Prefer running HugeGraph-Server on Java 11; compatibility with Java 8 is still retained for now.

Before reading further, be sure to run the java -version command to check your JDK version.

java -version
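The version check can also be scripted. Below is a minimal sketch that extracts the major version from a sample `java -version` output line (the version string shown is a stand-in; substitute the real first line of `java -version 2>&1`):

```shell
# Stand-in for: java -version 2>&1 | head -n1
ver_line='openjdk version "11.0.19" 2023-04-18'

# Pull out the major version number between the first quote and the first dot.
major=$(printf '%s' "$ver_line" | sed -E 's/.*"([0-9]+)\..*/\1/')
echo "$major"   # prints 11

# HugeGraph-Server prefers Java 11 (Java 8 still works for now).
if [ "$major" -ge 11 ]; then echo "JDK OK"; fi
```

Note that legacy `1.8.0_292`-style strings would need extra handling; this sketch only covers the modern version format.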

3 Deployment

There are four ways to deploy the HugeGraph-Server component:

3.1 Use a Docker container (recommended)

Refer to the Docker deployment guide.

We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start a HugeGraph server with RocksDB built in.

Options:

  1. Use docker exec -it graph bash to enter the container and perform operations there.
  2. Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to preload a built-in sample graph at startup. This can be verified via the RESTful API; see 5.1.1 for the detailed steps.

Also, if we want to manage other HugeGraph-related instances besides the server in a single file, we can deploy with docker-compose using the command docker-compose up -d (configuring only the server is fine too). A sample docker-compose.yml follows:

version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    # environment:
    #  - PRELOAD=true
    # PRELOAD is optional; when true, a built-in sample graph is preloaded at startup
    ports:
      - 8080:8080
 

3.2 Download the tar package

# use the latest version, here is 1.0.0 for example
 wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
 tar zxf *hugegraph*.tar.gz
 # enter the tool's package
 cd *hugegraph*/*tool* 
 

Note: ${version} is the version number; see the Download page for the latest version number, or download directly via the links on that page.

HugeGraph-Tools' main entry script is bin/hugegraph; use the help subcommand to see its usage. Only the one-click deployment command is introduced here.

bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]

{hugegraph-version} is the version of HugeGraphServer and HugeGraphStudio to deploy; users can check the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory for HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address prefix for the HugeGraphServer and HugeGraphStudio tar packages; the default download address is used when it is not provided. For example, to deploy HugeGraph-Server and HugeGraphStudio version 0.6, write the command as bin/hugegraph deploy -v 0.6 -p services.

4 Configuration

If you need to start HugeGraph quickly just for testing, you only need to modify a few configuration items (see the next section).

For detailed configuration, refer to the configuration docs and the configuration option reference.

5 Startup

5.1 Using Docker

In 3.1 Use a Docker container we introduced how to deploy hugegraph-server with docker; we can also set a parameter to load a sample graph when the server starts.

5.1.1 Create a sample graph when starting the server

Set the environment variable PRELOAD=true when starting docker so that data is loaded by the startup script.

  1. Using docker run

    Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

  2. Using docker-compose

    Create docker-compose.yml as follows, setting PRELOAD=true in the environment variables. Here example.groovy is a predefined script that preloads the sample data; if needed, you can change the preloaded data by mounting a new example.groovy script.

    version: '3'
    services:
      graph:
        image: hugegraph/hugegraph:latest
        container_name: graph
        environment:
          - PRELOAD=true
        volumes:
          - /path/to/yourscript:/hugegraph/scripts/example.groovy
        ports:
          - 8080:8080

    Start the container with the command docker-compose up -d

Query HugeGraphServer via the RESTful API and you get the following result:

> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
 
 {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
 

This result means the sample graph was created successfully.
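As a quick sanity check, the number of vertices in a response like the one above can be counted from the shell. This is a minimal sketch where $resp holds a trimmed stand-in for the real response (in practice, pipe the curl output in instead):

```shell
# Count how many vertices the API returned by counting "type":"vertex" markers.
resp='{"vertices":[{"id":"2:lop","type":"vertex"},{"id":"1:josh","type":"vertex"}]}'
count=$(printf '%s' "$resp" | grep -o '"type":"vertex"' | wc -l | tr -d ' ')
echo "$count"   # prints 2
```

The sample graph preloaded by example.groovy contains six vertices, so against a live server with PRELOAD=true you would expect a count of 6.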

5.2 Start with the startup script

Startup is divided into "first startup" and "subsequent startup". The distinction exists because the backend database must be initialized before the first startup, after which the service can be started.

When the service needs to be started again, after being stopped manually or for any other reason, the backend database persists, so the service can simply be started directly.

When HugeGraphServer starts, it connects to the backend storage and tries to check the backend storage version number. If the backend is not initialized, or is initialized but the version does not match (old-version data), HugeGraphServer fails to start and reports an error.

If HugeGraphServer needs to be accessible externally, modify the restserver.url configuration item in rest-server.properties (default http://127.0.0.1:8080) to the machine name or IP address.

Since the required configuration (hugegraph.properties) and startup steps differ slightly across backends, the configuration and startup of each backend are introduced one by one below.
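For example, the restserver.url change described above might be scripted like this. This is a sketch against a stand-in config file: the real conf/rest-server.properties contains more entries, and 0.0.0.0 is a placeholder for your machine name or IP:

```shell
# Work in a scratch directory with a stand-in for conf/rest-server.properties.
dir=$(mktemp -d)
conf="$dir/rest-server.properties"
printf 'restserver.url=http://127.0.0.1:8080\n' > "$conf"

# Rewrite the listen URL so the server is reachable from other machines
# (keeps a .bak backup of the original file).
sed -i.bak 's#^restserver.url=.*#restserver.url=http://0.0.0.0:8080#' "$conf"

grep '^restserver.url=' "$conf"   # prints restserver.url=http://0.0.0.0:8080
```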

5.2.1 RocksDB
Click to expand/collapse the RocksDB configuration and startup steps

RocksDB is an embedded database and needs no separate installation or deployment. It requires GCC version >= 4.3.0 (GLIBCXX_3.4.10); if this is not met, upgrade GCC first.

Modify hugegraph.properties:

backend=rocksdb
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.


Initialize the database (required on first startup, or whenever a new configuration has been manually added under conf/graphs/):

cd *hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
 #hbase.enable_partition=true
 #hbase.vertex_partitions=10
 #hbase.edge_partitions=30

Initialize the database (required on first startup, or whenever a new configuration has been manually added under conf/graphs/):

cd *hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
 jdbc.reconnect_max_times=3
 jdbc.reconnect_interval=3
 jdbc.ssl_mode=false

Initialize the database (required on first startup, or whenever a new configuration has been manually added under conf/graphs/):

cd *hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
 
 #cassandra.keyspace.strategy=SimpleStrategy
 #cassandra.keyspace.replication=3

Initialize the database (required on first startup, or whenever a new configuration has been manually added under conf/graphs/):

cd *hugegraph-${version}
 bin/init-store.sh
 Initing HugeGraph Store...
 2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
 
 #cassandra.keyspace.strategy=SimpleStrategy
 #cassandra.keyspace.replication=3

Since scylladb itself is an "optimized version" built on cassandra, users who have not installed scylladb can also use cassandra directly as the backend storage: just change backend and serializer to scylladb, and point host and port at the cassandra cluster's seeds and port. However, this is not recommended, since it forfeits the advantages of scylladb itself.

Initialize the database (required on first startup, or whenever a new configuration has been manually added under conf/graphs/):

cd *hugegraph-${version}
bin/init-store.sh

Start the server:

bin/start-hugegraph.sh
Starting HugeGraphServer...
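To make the scylladb-on-cassandra fallback described above concrete, a hugegraph.properties sketch might look like the following. The host names are placeholders, and the exact option names should be checked against your version's configuration reference:

```properties
# Assumed sketch: scylladb backend pointed at an existing cassandra cluster
backend=scylladb
serializer=scylladb
# seeds and native-transport port of the cassandra/scylla cluster (placeholders)
cassandra.host=cassandra-seed-1,cassandra-seed-2
cassandra.port=9042
```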
         ...
     ]
 }

For detailed APIs, refer to the RESTful-API docs.

You can also view the APIs by visiting localhost:8080/swagger-ui/index.html.

image

7 Stop the server

$ cd *hugegraph-${version}
$ bin/stop-hugegraph.sh

8 Debug the server with IntelliJ IDEA

See Setting up the Server development environment in IDEA.

3.2 - HugeGraph-Loader Quick Start

1 HugeGraph-Loader Overview

HugeGraph-Loader is HugeGraph's data import component. It converts data from multiple data sources into graph vertices and edges and imports them into the graph database in batches.

Currently supported data sources include:

Local disk files and HDFS files support resumable import.

Details follow later.

Note: using HugeGraph-Loader requires the HugeGraph Server service; for downloading and starting the Server, see HugeGraph-Server Quick Start.

2 Get HugeGraph-Loader

There are two ways to get HugeGraph-Loader:

2.1 Download the compiled tarball

Download the latest HugeGraph-Toolchain release package, which bundles the complete toolset of loader + tool + hubble. If you have already downloaded it, skip the duplicate steps.

wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
 --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
 --username admin --token admin --host xx.xx.xx.xx --port 8093 \
 --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g

3.3 - HugeGraph-Hubble Quick Start

1 HugeGraph-Hubble Overview

HugeGraph is an analysis-oriented graph database system that supports batch operations. Originally developed by the Baidu security team, it fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecosystem for export, backup, and restore, effectively addressing the storage, query, and correlation-analysis needs of massive graph data. HugeGraph is widely used in areas such as risk control for banks and brokerages, insurance claims, recommendation and search, criminal investigation, knowledge graph construction, network security, and IT operations, aiming to bring the broader value of integrated data to more industries, organizations, and users.

HugeGraph-Hubble is HugeGraph's one-stop visual analysis platform. It covers the whole process from data modeling, through rapid data import, to online and offline analysis and unified graph management, providing wizard-style, end-to-end operation of graph applications. It aims to improve fluency, lower the barrier to entry, and deliver a more efficient and user-friendly experience.

The platform mainly includes the following modules:

Graph Management

The graph management module connects the platform to graph data via graph creation, providing unified management of multiple graphs, including graph access, editing, deletion, and query.

Metadata Modeling

The metadata modeling module builds and manages graph models by creating property types, vertex types, edge types, and index types. The platform offers two modes, list mode and graph mode, and can display the metadata model in real time for better intuition. It also provides cross-graph metadata reuse, eliminating the tedious repeated creation of identical metadata, which greatly improves modeling efficiency and usability.

Data Import

Data import converts the user's business data into graph vertices and edges and inserts them into the graph database. The platform provides a wizard-style visual import module: by creating import tasks, it supports managing import tasks and running multiple import tasks in parallel to improve throughput. After entering an import task, simply follow the step-by-step prompts, upload files, and fill in the content to complete the import of graph data. Resumable import and error-retry mechanisms are also supported, lowering import cost and improving efficiency.

Graph Analysis

By entering the graph traversal language Gremlin, users can run high-performance, general-purpose analysis of graph data, with features such as customized multi-dimensional path queries on vertices. Three display modes for graph results are provided — graph, table, and JSON — presenting the data from multiple dimensions to satisfy a variety of usage scenarios. Run history and favorite-statement features make graph operations traceable and allow query inputs to be reused and shared quickly. Graph data can be exported in JSON format.

Task Management

For long-running asynchronous tasks such as Gremlin tasks that traverse the whole graph and index creation or rebuilding, the platform provides task management features for unified management and result viewing of asynchronous tasks.

2 Deployment

There are three ways to deploy hugegraph-hubble:

2.1 Using Docker (recommended)

Special note: in docker mode, if hubble and the server are on the same host, do not set the graph's hostname on the hubble page to localhost/127.0.0.1 — that points inside the hubble container rather than at the host, and hubble will fail to connect to the server.

If hubble and the server are in the same docker network, it is recommended to use the container_name (graph in the example below) directly as the hostname. Alternatively, use the host machine's IP as the hostname, in which case the port is the one the host configured for the server.

We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

Or start hubble with docker-compose. Again, if hubble and graph are in the same docker network, graph's container_name can be used for access without needing the host IP.

Run docker-compose up -d with the following docker-compose.yml:

version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080

  hubble:
    image: hugegraph/hubble
 mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
 cd apache-hugegraph-hubble-incubating*
 

Start hubble:

bin/start-hubble.sh -d

3 平台使用流程

平台的模块使用流程如下:

image

4 平台使用说明

4.1 图管理

4.1.1 图创建

图管理模块下,点击【创建图】,通过填写图 ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。

image

创建图填写内容如下:

image

注意:如果使用 docker 启动 hubble,且 server 和 hubble 位于同一宿主机,不能直接使用 localhost/127.0.0.1 作为主机名。如果 hubble 和 server 在同一 docker 网络下,则可以直接使用 container_name 作为主机名,端口则为 8080。或者也可以使用宿主机 IP 作为主机名,此时端口为宿主机为 server 配置的端口。

4.1.2 图访问

实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。

image
4.1.3 图管理
  1. 用户通过对图的概览、搜索以及单图的信息编辑与删除,实现图的统一管理。
  2. 搜索范围:可对图名称和 ID 进行搜索。
image

4.2 元数据建模(列表 + 图模式)

4.2.1 模块入口

左侧导航处:

image
4.2.2 属性类型
4.2.2.1 创建
  1. 填写或选择属性名称、数据类型、基数,完成属性的创建。
  2. 创建的属性可作为顶点类型和边类型的属性。

列表模式:

image

图模式:

image
4.2.2.2 复用
  1. 平台提供【复用】功能,可直接复用其他图的元数据。
  2. 选择需要复用的图 ID,继续选择需要复用的属性,之后平台会进行是否冲突的校验,通过后,可实现元数据的复用。

选择复用项:

image

校验复用项:

image
4.2.2.3 管理
  1. 在属性列表中可进行单条删除或批量删除操作。
4.2.3 顶点类型
4.2.3.1 创建
  1. 填写或选择顶点类型名称、ID 策略、关联属性、主键属性,顶点样式、查询结果中顶点下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成顶点类型的创建。

列表模式:

image

图模式:

image
4.2.3.2 复用
  1. 顶点类型的复用,会将此类型关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 3.2.2.2。
4.2.3.3 管理
  1. 可进行编辑操作,顶点样式、关联类型、顶点展示内容、属性索引可编辑,其余不可编辑。

  2. 可进行单条删除或批量删除操作。

image
4.2.4 边类型
4.2.4.1 创建
  1. 填写或选择边类型名称、起点类型、终点类型、关联属性、是否允许多次连接、边样式、查询结果中边下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成边类型的创建。

列表模式:

image

图模式:

image
4.2.4.2 复用
  1. 边类型的复用,会将此类型的起点类型、终点类型、关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 3.2.2.2。
4.2.4.3 管理
  1. 可进行编辑操作,边样式、关联属性、边展示内容、属性索引可编辑,其余不可编辑,同顶点类型。
  2. 可进行单条删除或批量删除操作。
4.2.5 索引类型

展示顶点类型和边类型的顶点索引和边索引。

4.3 数据导入

注意:目前推荐使用 hugegraph-loader 进行正式数据导入,hubble 内置的导入功能用于测试和简单上手。

数据导入的使用流程如下:

image
4.3.1 模块入口

左侧导航处:

image
4.3.2 创建任务
  1. 填写任务名称和备注(非必填),可以创建导入任务。
  2. 可创建多个导入任务,并行导入。
image
4.3.3 上传文件
  1. 上传需要构图的文件,目前支持的格式为 CSV,后续会不断更新。
  2. 可同时上传多个文件。
image
4.3.4 设置数据映射
  1. 对上传的文件分别设置数据映射,包括文件设置和类型设置

  2. 文件设置:勾选或填写是否包含表头、分隔符、编码格式等文件本身的设置内容,均设置默认值,无需手动填写

  3. 类型设置:

    1. 顶点映射和边映射:

      【顶点类型】 :选择顶点类型,并为其 ID 映射上传文件中列数据;

      【边类型】:选择边类型,为其起点类型和终点类型的 ID 列映射上传文件的列数据;

    2. 映射设置:为选定的顶点类型的属性映射上传文件中的列数据,此处,若属性名称与文件的表头名称一致,可自动匹配映射属性,无需手动填选

    3. 完成设置后,显示设置列表,方可进行下一步操作,支持映射的新增、编辑、删除操作

设置映射的填写内容:

image

映射列表:

image
4.3.5 导入数据

导入前需要填写导入设置参数,填写完成后,可开始向图库中导入数据

  1. 导入设置
image
  1. 导入详情
image

4.4 数据分析

4.4.1 模块入口

左侧导航处:

image
4.4.2 多图切换

通过左侧切换入口,灵活切换多图的操作空间

image
4.4.3 图分析与处理

HugeGraph 支持 Apache TinkerPop3 的图遍历查询语言 Gremlin,Gremlin 是一种通用的图数据库查询语言,通过输入 Gremlin 语句,点击执行,即可执行图数据的查询分析操作,并可实现顶点/边的创建及删除、顶点/边的属性修改等。

Gremlin 查询后,下方为图结果展示区域,提供 3 种图结果展示方式,分别为:【图模式】、【表格模式】、【Json 模式】。

支持缩放、居中、全屏、导出等操作。

【图模式】

image

【表格模式】

image

【Json 模式】

image
4.4.4 数据详情

点击顶点/边实体,可查看顶点/边的数据详情,包括:顶点/边类型,顶点 ID,属性及对应值,拓展图的信息展示维度,提高易用性。

4.4.5 图结果的多维路径查询

除了全局的查询外,可针对查询结果中的顶点进行深度定制化查询以及隐藏操作,实现图结果的定制化挖掘。

右击顶点,出现顶点的菜单入口,可进行展示、查询、隐藏等操作。

双击顶点,也可展示与选中点关联的顶点。

image
4.4.6 新增顶点/边
4.4.6.1 新增顶点

在图区可通过两个入口,动态新增顶点,如下:

  1. 点击图区面板,出现添加顶点入口
  2. 点击右上角的操作栏中的首个图标

通过选择或填写顶点类型、ID 值、属性信息,完成顶点的增加。

入口如下:

image

添加顶点内容如下:

image
4.4.6.2 新增边

右击图结果中的顶点,可增加该点的出边或者入边。

4.4.7 执行记录与收藏的查询
  1. 图区下方记载每次查询记录,包括:查询时间、执行类型、内容、状态、耗时、以及【收藏】和【加载】操作,实现图执行的全方位记录,有迹可循,并可对执行内容快速加载复用
  2. 提供语句的收藏功能,可对常用语句进行收藏操作,方便高频语句快速调用
image

4.5 任务管理

4.5.1 模块入口

左侧导航处:

image
4.5.2 任务管理
  1. 提供异步任务的统一的管理与结果查看,异步任务包括 4 类,分别为:
  1. 列表显示当前图的异步任务信息,包括:任务 ID,任务名称,任务类型,创建时间,耗时,状态,操作,实现对异步任务的管理。
  2. 支持对任务类型和状态进行筛选
  3. 支持搜索任务 ID 和任务名称
  4. 可对异步任务进行删除或批量删除操作
image
4.5.3 Gremlin 异步任务

1.创建任务

image

点击查看入口,跳转到任务管理列表,如下:

image

4.查看结果

4.5.4 OLAP 算法任务

Hubble 上暂未提供可视化的 OLAP 算法执行,可调用 RESTful API 进行 OLAP 类算法任务,在任务管理中通过 ID 找到相应任务,查看进度与结果等。

4.5.5 删除元数据、重建索引

1.创建任务

image
image

2.任务详情

image

3.4 - HugeGraph-Client Quick Start

1 HugeGraph-Client Overview

HugeGraph-Client sends HTTP requests to HugeGraph-Server and parses the Server's execution results. Currently only a Java version is provided. Users can use HugeGraph-Client to write Java code that operates on HugeGraph, such as creating, reading, updating, and deleting metadata and graph data, or executing gremlin statements.

2 Environment Requirements

3 Usage Workflow

The basic steps for using HugeGraph-Client are as follows:

See the complete example in the next section for the detailed usage process.

4 Complete Example

4.1 Create a Maven Project

You can use Eclipse or IntelliJ IDEA to create the project:

4.2 Add the hugegraph-client Dependency

Add the hugegraph-client dependency:


 <dependencies>
 
$ ./bin/start-hugegraph.sh

Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]

View the created graphs:

curl http://127.0.0.1:8080/graphs/
 
 
  1. Export the server certificate from the server's private key:
keytool -export -alias serverkey -keystore server.keystore -file server.crt

server.crt is the server's certificate.

Client side:

keytool -import -alias serverkey -file server.crt -keystore client.truststore

client.truststore is for the client's use and holds the trusted certificates.

4.5 - HugeGraph-Computer Configuration

Computer Config Options

config optiondefault valuedescription
algorithm.message_classorg.apache.hugegraph.computer.core.config.NullThe class of message passed when compute vertex.
algorithm.params_classorg.apache.hugegraph.computer.core.config.NullThe class used to transfer algorithms’ parameters before algorithm been run.
algorithm.result_classorg.apache.hugegraph.computer.core.config.NullThe class of vertex’s value, the instance is used to store computation result for the vertex.
allocator.max_vertices_per_thread10000Maximum number of vertices per thread processed in each memory allocator
bsp.etcd_endpointshttp://localhost:2379The end points to access etcd.
bsp.log_interval30000The log interval(in ms) to print the log while waiting bsp event.
bsp.max_super_step10The max super step of the algorithm.
bsp.register_timeout300000The max timeout to wait for master and works to register.
bsp.wait_master_timeout86400000The max timeout(in ms) to wait for master bsp event.
bsp.wait_workers_timeout86400000The max timeout to wait for workers bsp event.
hgkv.max_data_block_size65536The max byte size of hgkv-file data block.
hgkv.max_file_size2147483648The max number of bytes in each hgkv-file.
hgkv.max_merge_files10The max number of files to merge at one time.
hgkv.temp_file_dir/tmp/hgkvThis folder is used to store temporary files, temporary files will be generated during the file merging process.
hugegraph.namehugegraphThe graph name to load data and write results back.
hugegraph.urlhttp://127.0.0.1:8080The hugegraph url to load data and write results back.
input.edge_directionOUTThe data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
input.edge_freqMULTIPLEThe frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
input.filter_classorg.apache.hugegraph.computer.core.input.filter.DefaultInputFilterThe class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
input.loader_schema_pathThe schema path of loader input, only takes effect when the input.source_type=loader is enabled
input.loader_struct_pathThe struct path of loader input, only takes effect when the input.source_type=loader is enabled
input.max_edges_in_one_vertex200The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
input.source_typehugegraph-serverThe source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
input.split_fetch_timeout300The timeout in seconds to fetch input splits
input.split_max_splits10000000The maximum number of input splits
input.split_page_size500The page size for streamed load input split data
input.split_size1048576The input split size in bytes
job.idlocal_0001The job id on Yarn cluster or K8s cluster.
job.partitions_count1The partitions count for computing one graph algorithm job.
job.partitions_thread_nums4The number of threads for partition parallel compute.
job.workers_count1The workers count for computing one graph algorithm job.
master.computation_classorg.apache.hugegraph.computer.core.master.DefaultMasterComputationMaster-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
output.batch_size500The batch size of output
output.batch_threads1The threads number used to batch output
output.hdfs_core_site_pathThe hdfs core site path.
output.hdfs_delimiter,The delimiter of hdfs output.
output.hdfs_kerberos_enablefalseIs Kerberos authentication enabled for Hdfs.
output.hdfs_kerberos_keytabThe Hdfs’s key tab file for kerberos authentication.
output.hdfs_kerberos_principalThe Hdfs’s principal for kerberos authentication.
output.hdfs_krb5_conf/etc/krb5.confKerberos configuration file.
output.hdfs_merge_partitionstrueWhether merge output files of multiple partitions.
output.hdfs_path_prefix/hugegraph-computer/resultsThe directory of hdfs output result.
output.hdfs_replication3The replication number of hdfs.
output.hdfs_site_pathThe hdfs site path.
output.hdfs_urlhdfs://127.0.0.1:9000The hdfs url of output.
output.hdfs_userhadoopThe hdfs user of output.
output.output_classorg.apache.hugegraph.computer.core.output.LogOutputThe class to output the computation result of each vertex. Be called after iteration computation.
output.result_namevalueThe value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
output.result_write_typeOLAP_COMMONThe result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
output.retry_interval10The retry interval when output failed
output.retry_times3The retry times when output failed
output.single_threads1The threads number used to single output
output.thread_pool_shutdown_timeout60The timeout seconds of output threads pool shutdown
output.with_adjacent_edgesfalseOutput the adjacent edges of the vertex or not
output.with_edge_propertiesfalseOutput the properties of the edge or not
output.with_vertex_propertiesfalseOutput the properties of the vertex or not
sort.thread_nums4The number of threads performing internal sorting.
transport.client_connect_timeout3000The timeout(in ms) of client connect to server.
transport.client_threads4The number of transport threads for client.
transport.close_timeout10000The timeout(in ms) of close server or close client.
transport.finish_session_timeout0The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
transport.heartbeat_interval20000The minimum interval(in ms) between heartbeats on client side.
transport.io_modeAUTOThe network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
transport.max_pending_requests8The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
transport.max_syn_backlog511The capacity of SYN queue on server side, 0 means using system default value.
transport.max_timeout_heartbeat_count120The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
transport.min_ack_interval200The minimum interval(in ms) of server reply ack.
transport.min_pending_requests6The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
transport.network_retries3The number of retry attempts for network communication,if network unstable.
transport.provider_classorg.apache.hugegraph.computer.core.network.netty.NettyTransportProviderThe transport provider, currently only supports Netty.
transport.receive_buffer_size0The size of socket receive-buffer in bytes, 0 means using system default value.
transport.recv_file_modetrueWhether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
transport.send_buffer_size0The size of socket send-buffer in bytes, 0 means using system default value.
transport.server_host127.0.0.1The server hostname or ip to listen on to transfer data.
transport.server_idle_timeout | 360000 | The max timeout (in ms) of server idle.
transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it's set to 0.
transport.server_threads | 4 | The number of transport threads for the server.
transport.sync_request_timeout | 10000 | The timeout (in ms) to wait for a response after sending a sync-request.
transport.tcp_keep_alive | true | Whether to enable TCP keep-alive.
transport.transport_epoll_lt | false | Whether to enable EPOLL level-trigger.
transport.write_buffer_high_mark | 67108864 | The high water mark for the write buffer in bytes; sending becomes unavailable when the number of queued bytes > write_buffer_high_mark.
transport.write_buffer_low_mark | 33554432 | The low water mark for the write buffer in bytes; sending becomes available again when the number of queued bytes < write_buffer_low_mark.
transport.write_socket_timeout | 3000 | The timeout (in ms) to write data to the socket buffer.
valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of the value-file.
worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner that can combine messages into one value for a vertex; for example, the page-rank algorithm can combine the messages of a vertex into a sum value.
worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create the worker-computation object; worker-computation is used to compute each vertex in each superstep.
worker.data_dirs | [jobs] | The directories, separated by ',', that received vertices and messages can persist into.
worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner that can combine several properties of the same edge into one set of properties at input step.
worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
worker.received_buffers_bytes_limit | 104857600 | The limit in bytes on buffers of received data; the total size of all buffers can't exceed this limit. If received buffers reach this limit, they will be merged into a file.
worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner that can combine several properties of the same vertex into one set of properties at input step.
worker.wait_finish_messages_timeout | 86400000 | The max timeout (in ms) the message-handler waits for the finish-message of all workers.
worker.wait_sort_timeout | 600000 | The max timeout (in ms) the message-handler waits for the sort-thread to sort one batch of buffers.
worker.write_buffer_capacity | 52428800 | The initial size of the write buffer used to store vertices or messages.
worker.write_buffer_threshold | 52428800 | The threshold of the write buffer; exceeding it triggers sorting. The write buffer is used to store vertices or messages.
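The two write-buffer watermarks above form a hysteresis band: sending is switched off once queued bytes exceed the high mark and switched back on only after they drop below the low mark. A minimal sketch of that toggling logic (an illustration of the described behavior, not HugeGraph's actual implementation):

```python
# Defaults from the table above:
HIGH_MARK = 64 * 1024 * 1024   # transport.write_buffer_high_mark (67108864)
LOW_MARK = 32 * 1024 * 1024    # transport.write_buffer_low_mark (33554432)

class WriteBuffer:
    """Hypothetical model of watermark-based write backpressure."""

    def __init__(self, high_mark=HIGH_MARK, low_mark=LOW_MARK):
        self.queued_bytes = 0
        self.high_mark = high_mark
        self.low_mark = low_mark
        self.writable = True

    def _update(self):
        if self.queued_bytes > self.high_mark:
            self.writable = False      # sending unavailable
        elif self.queued_bytes < self.low_mark:
            self.writable = True       # sending available again

    def enqueue(self, n):
        self.queued_bytes += n
        self._update()

    def drain(self, n):
        self.queued_bytes = max(0, self.queued_bytes - n)
        self._update()
```

Keeping the two marks apart (rather than one threshold) prevents the writable flag from flapping on every small enqueue/drain around a single boundary.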

K8s Operator Config Options

NOTE: These options need to be supplied as environment variables, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL

config option | default value | description
k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed.
k8s.close_reconciler_timeout | 120 | The max timeout (in ms) to close the reconciler.
k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for the operator system.
k8s.max_reconcile_retry | 3 | The max retry times of reconcile.
k8s.probe_backlog | 50 | The maximum backlog for serving health probes.
k8s.probe_port | 9892 | The port that the controller binds to for serving health probes.
k8s.ready_check_internal | 1000 | The time interval (in ms) between ready checks.
k8s.ready_timeout | 30000 | The max timeout (in ms) of ready checks.
k8s.reconciler_count | 10 | The max number of reconciler threads.
k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled.
k8s.timezone | Asia/Shanghai | The timezone of the computer job and operator.
k8s.watch_namespace | hugegraph-computer-system | Watch custom resources only in this namespace and ignore other namespaces; '*' means all namespaces will be watched.
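The NOTE above gives one example of the option-to-environment-variable conversion (k8s.internal_etcd_url => INTERNAL_ETCD_URL). A sketch of that name mapping, inferring the rule (drop the `k8s.` prefix, upper-case the rest) from that single example:

```python
def option_to_env_var(option: str) -> str:
    """Map a k8s.* operator option name to its environment-variable form.

    Rule inferred from the documented example
    k8s.internal_etcd_url => INTERNAL_ETCD_URL; other options are
    assumed to follow the same pattern.
    """
    return option.removeprefix("k8s.").upper()
```

For example, `option_to_env_var("k8s.watch_namespace")` would yield `WATCH_NAMESPACE` under this assumed rule.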

HugeGraph-Computer CRD

CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

spec | default value | description | required
algorithmName | | The name of the algorithm. | true
jobId | | The job id. | true
image | | The image of the algorithm. | true
computerConf | | The map of computer config options. | true
workerInstances | | The number of worker instances; it overrides the 'job.workers_count' option. | true
pullPolicy | Always | The pull-policy of the image; for details please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false
pullSecrets | | The pull-secrets of the image; for details please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false
masterCpu | | The cpu limit of the master; the unit can be 'm' or without unit. For details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
workerCpu | | The cpu limit of the worker; the unit can be 'm' or without unit. For details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
masterMemory | | The memory limit of the master; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki. For details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
workerMemory | | The memory limit of the worker; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki. For details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
log4jXml | | The content of log4j.xml for the computer job. | false
jarFile | | The jar path of the computer algorithm. | false
remoteJarUri | | The remote jar uri of the computer algorithm; it will overlay the algorithm image. | false
jvmOptions | | The java startup parameters of the computer job. | false
envVars | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false
envFrom | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false
masterCommand | bin/start-computer.sh | The run command of the master, equivalent to the 'Entrypoint' field of Docker. | false
masterArgs | ["-r master", "-d k8s"] | The run args of the master, equivalent to the 'Cmd' field of Docker. | false
workerCommand | bin/start-computer.sh | The run command of the worker, equivalent to the 'Entrypoint' field of Docker. | false
workerArgs | ["-r worker", "-d k8s"] | The run args of the worker, equivalent to the 'Cmd' field of Docker. | false
volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
secretPaths | | The map of k8s-secret name and mount path. | false
configMapPaths | | The map of k8s-configmap name and mount path. | false
podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false
securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false
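Putting the required spec fields from the table together, a minimal job object might look like the following sketch. The apiVersion/kind values, the image name, and the computerConf key are illustrative assumptions, not taken from the CRD table; only the five required spec fields are sourced from the table above:

```python
import json

# Required spec fields per the CRD table above.
REQUIRED = ["algorithmName", "jobId", "image", "computerConf", "workerInstances"]

job = {
    "apiVersion": "hugegraph.apache.org/v1",   # assumed group/version
    "kind": "HugeGraphComputerJob",            # assumed kind
    "spec": {
        "algorithmName": "page-rank",
        "jobId": "pagerank-demo-001",          # hypothetical id
        "image": "example.registry/pagerank:latest",  # hypothetical image
        "computerConf": {"algorithm.params_class": "..."},  # placeholder conf
        "workerInstances": 3,   # overrides the 'job.workers_count' option
    },
}

# Sanity-check that every required field is present before submitting.
missing = [f for f in REQUIRED if f not in job["spec"]]
assert not missing, f"missing required spec fields: {missing}"
print(json.dumps(job, indent=2))
```

JSON is a subset of YAML, so the printed object is also a valid starting point for a YAML manifest; consult the linked CRD file for the authoritative schema.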

KubeDriver Config Options

config option | default value | description
k8s.build_image_bash_path | | The path of the command used to build the image.
k8s.enable_internal_algorithm | true | Whether to enable internal algorithms.
k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of the computer framework.
k8s.image_repository_password | | The password for logging in to the image repository.
k8s.image_repository_registry | | The address for logging in to the image repository.
k8s.image_repository_url | hugegraph/hugegraph-computer | The url of the image repository.
k8s.image_repository_username | | The username for logging in to the image repository.
k8s.internal_algorithm | [pageRank] | The name list of all internal algorithms.
k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithms.
k8s.jar_file_dir | /cache/jars/ | The directory where the algorithm jar is uploaded.
k8s.kube_config | ~/.kube/config | The path of the k8s config file.
k8s.log4j_xml_path | | The log4j.xml path for the computer job.
k8s.namespace | hugegraph-computer-system | The namespace of the hugegraph-computer system.
k8s.pull_secret_names | [] | The names of pull-secrets for pulling the image.

5 - API

5.1 - HugeGraph RESTful API

HugeGraph-Server通过HugeGraph-API基于HTTP协议为Client提供操作图的接口,主要包括元数据和 -图数据的增删改查,遍历算法,变量,图操作及其他操作。

5.1.1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
+图数据的增删改查,遍历算法,变量,图操作及其他操作。

除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html 访问 swagger-ui 以查看 RESTful API。示例可以参考此处。

5.1.1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
 
 e.g: GET http://localhost:8080/graphs/hugegraph/schema
 
Response Status
200
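调用上面的 schema 接口只需要一个普通的 HTTP 客户端。下面是一个使用 Python 标准库的示意(假设本地 8080 端口有一个名为 hugegraph 的图的 server;仅为草图,非官方客户端):

```python
from urllib.request import urlopen

def schema_url(graph_name: str, host: str = "localhost", port: int = 8080) -> str:
    """Build the GET /graphs/{graph_name}/schema URL shown above."""
    return f"http://{host}:{port}/graphs/{graph_name}/schema"

def fetch_schema(graph_name: str) -> str:
    # Returns the JSON body with the graph's full schema:
    # PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
    with urlopen(schema_url(graph_name)) as resp:
        assert resp.status == 200
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    print(schema_url("hugegraph"))
```

也可以直接用 `curl http://localhost:8080/graphs/hugegraph/schema` 得到同样的结果。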
diff --git a/cn/docs/clients/_print/index.html b/cn/docs/clients/_print/index.html
index 6b4fb8c6e..ab14e25a7 100644
--- a/cn/docs/clients/_print/index.html
+++ b/cn/docs/clients/_print/index.html
@@ -1,7 +1,7 @@
 API | HugeGraph
 

1 - HugeGraph RESTful API

HugeGraph-Server通过HugeGraph-API基于HTTP协议为Client提供操作图的接口,主要包括元数据和 -图数据的增删改查,遍历算法,变量,图操作及其他操作。

1.1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
+图数据的增删改查,遍历算法,变量,图操作及其他操作。

除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html 访问 swagger-ui 以查看 RESTful API。示例可以参考此处。

1.1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
 
 e.g: GET http://localhost:8080/graphs/hugegraph/schema
 
Response Status
200
diff --git a/cn/docs/clients/index.xml b/cn/docs/clients/index.xml
index 22fe25b3b..e098bf423 100644
--- a/cn/docs/clients/index.xml
+++ b/cn/docs/clients/index.xml
@@ -1,6 +1,7 @@
 HugeGraph – API/cn/docs/clients/Recent content in API on HugeGraphHugo -- gohugo.ioDocs: HugeGraph RESTful API/cn/docs/clients/restful-api/Mon, 01 Jan 0001 00:00:00 +0000/cn/docs/clients/restful-api/
 <p>HugeGraph-Server通过HugeGraph-API基于HTTP协议为Client提供操作图的接口,主要包括元数据和
-图数据的增删改查,遍历算法,变量,图操作及其他操作。</p>Docs: HugeGraph Java Client/cn/docs/clients/hugegraph-client/Mon, 01 Jan 0001 00:00:00 +0000/cn/docs/clients/hugegraph-client/
+图数据的增删改查,遍历算法,变量,图操作及其他操作。</p>
+<p>除了下方的文档,你还可以通过 <code>localhost:8080/swagger-ui/index.html</code> 访问 <code>swagger-ui</code> 以查看 <code>RESTful API</code>。<a href="/cn/docs/quickstart/hugegraph-server#swaggerui-example">示例可以参考此处</a></p>Docs: HugeGraph Java Client/cn/docs/clients/hugegraph-client/Mon, 01 Jan 0001 00:00:00 +0000/cn/docs/clients/hugegraph-client/
 <p>本文的代码都是<code>java</code>语言写的,但其风格与<code>gremlin(groovy)</code>是非常类似的。用户只需要把代码中的变量声明替换成<code>def</code>或直接去掉,
 就能将<code>java</code>代码转变为<code>groovy</code>;另外就是每一行语句最后可以不加分号,<code>groovy</code>认为一行就是一条语句。
 用户在<code>HugeGraph-Studio</code>中编写的<code>gremlin(groovy)</code>可以参考本文的<code>java</code>代码,下面会举出几个例子。</p>
diff --git a/cn/docs/clients/restful-api/_print/index.html b/cn/docs/clients/restful-api/_print/index.html
index 3a4132ac7..37410ac4e 100644
--- a/cn/docs/clients/restful-api/_print/index.html
+++ b/cn/docs/clients/restful-api/_print/index.html
@@ -1,9 +1,9 @@
 HugeGraph RESTful API | HugeGraph
+除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html …">
 

This is the multi-page printable view of this section. Click here to print.

Return to the regular view of this page.

HugeGraph RESTful API

HugeGraph-Server通过HugeGraph-API基于HTTP协议为Client提供操作图的接口,主要包括元数据和 -图数据的增删改查,遍历算法,变量,图操作及其他操作。

1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
+图数据的增删改查,遍历算法,变量,图操作及其他操作。

除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html 访问 swagger-ui 以查看 RESTful API。示例可以参考此处。

1 - Schema API

1.1 Schema

HugeGraph 提供单一接口获取某个图的全部 Schema 信息,包括:PropertyKey、VertexLabel、EdgeLabel 和 IndexLabel。

Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
 
 e.g: GET http://localhost:8080/graphs/hugegraph/schema
 
Response Status
200
diff --git a/cn/docs/clients/restful-api/index.html b/cn/docs/clients/restful-api/index.html
index 95b4839c2..d37333124 100644
--- a/cn/docs/clients/restful-api/index.html
+++ b/cn/docs/clients/restful-api/index.html
@@ -1,13 +1,13 @@
 HugeGraph RESTful API | HugeGraph
+除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html …">
 

HugeGraph RESTful API

HugeGraph-Server通过HugeGraph-API基于HTTP协议为Client提供操作图的接口,主要包括元数据和 -图数据的增删改查,遍历算法,变量,图操作及其他操作。


Last modified July 31, 2023: doc: added cypher api (#280) (18547af3)
+图数据的增删改查,遍历算法,变量,图操作及其他操作。

除了下方的文档,你还可以通过 localhost:8080/swagger-ui/index.html 访问 swagger-ui 以查看 RESTful API。示例可以参考此处。


diff --git a/cn/docs/config/_print/index.html b/cn/docs/config/_print/index.html index 3cc7752fe..2a1824f59 100644 --- a/cn/docs/config/_print/index.html +++ b/cn/docs/config/_print/index.html @@ -248,7 +248,7 @@
$ ./bin/start-hugegraph.sh
 
 Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:18080/graphs)...OK
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
 Started [pid 21614]
 

查看创建的图:

curl http://127.0.0.1:8080/graphs/
 
diff --git a/cn/docs/config/config-guide/index.html b/cn/docs/config/config-guide/index.html
index a98b3e61d..d4f847f84 100644
--- a/cn/docs/config/config-guide/index.html
+++ b/cn/docs/config/config-guide/index.html
@@ -6,12 +6,12 @@
 HugeGraphServer 内部集成了 GremlinServer 和 RestServer,而 gremlin-server.yaml 和 rest-server.properties 就是用来配置这两个 Server 的。
 GremlinServer:GremlinServer 接受用户的 gremlin 语句,解析后转而调用 Core 的代码。 RestServer:提供 RESTful API,根据不同的 HTTP 请求,调用对应的 Core API,如果用户请求体是 gremlin 语句,则会转发给 GremlinServer,实现对图数据的操作。 下面对这三个配置文件逐一介绍。
 2 gremlin-server.yaml gremlin-server.yaml 文件默认的内容如下:
-# host and port of gremlin server, need to be consistent with host and port in rest-server.properties #host: 127.0.0.1 #port: 8182 # Gremlin 查询中的超时时间(以毫秒为单位) evaluationTimeout: 30000 channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer # 不要在此处设置图形,此功能将在支持动态添加图形后再进行处理 graphs: { } scriptEngines: { gremlin-groovy: { staticImports: [ org.">
+

diff --git a/cn/docs/config/index.xml b/cn/docs/config/index.xml
index 0d241d401..122dcf573 100644
--- a/cn/docs/config/index.xml
+++ b/cn/docs/config/index.xml
@@ -304,7 +304,7 @@
 $ ./bin/start-hugegraph.sh
 
 Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:18080/graphs)...OK
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
 Started [pid 21614]
 
 查看创建的图:
 
 curl http://127.0.0.1:8080/graphs/

diff --git "a/cn/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png" "b/cn/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png"
new file mode 100644
index 000000000..87a154818
Binary files /dev/null and "b/cn/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png" differ

diff --git a/cn/docs/index.xml b/cn/docs/index.xml
index c3fa49eaa..306712163 100644
--- a/cn/docs/index.xml
+++ b/cn/docs/index.xml
@@ -334,7 +334,7 @@
 $ ./bin/start-hugegraph.sh
 
 Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:18080/graphs)...OK
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
 Started [pid 21614]
 
 查看创建的图:
 
 curl http://127.0.0.1:8080/graphs/
@@ -1339,17 +1339,18 @@
 可选项:
 1. 可以使用 docker exec -it graph bash 进入容器完成一些操作
-2. 可以使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph 在启动的时候预加载一个 内置的 样例图。
+2. 可以使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph 在启动的时候预加载一个内置的样例图。可以通过 RESTful API 进行验证。具体步骤可以参考 5.1.1
-另外,我们也可以使用 docker-compose 完成部署,使用用 docker-compose up -d, 以下是一个样例的 docker-compose.yml:
+另外,如果我们希望能够在一个文件中管理除了 server 之外的其他 Hugegraph 相关的实例,我们也可以使用 docker-compose 完成部署,使用命令 docker-compose up -d,(当然只配置 server 也是可以的)以下是一个样例的 docker-compose.yml:
 version: '3'
 services:
   graph:
     image: hugegraph/hugegraph
-    #environment:
+    # environment:
     #  - PRELOAD=true
+    # PRELOAD 为可选参数,为 True 时可以在启动的时候预加载一个内置的样例图
     ports:
-      - 18080:8080
+      - 8080:8080
 3.2 下载 tar 包
 # use the latest version, here is 1.0.0 for example
 wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
@@ -1401,17 +1402,17 @@
 详细的配置介绍请参考配置文档及配置项介绍。
 5 启动
 5.1 使用 Docker
-在 3.1 使用 Docker 容器中,我们已经介绍了 如何使用 docker 部署 hugegraph-server, 我们还可以设置参数在 sever 启动的时候加载样例图
+在 3.1 使用 Docker 容器中,我们已经介绍了如何使用 docker 部署 hugegraph-server, 我们还可以设置参数在 sever 启动的时候加载样例图
 5.1.1 启动 server 的时候创建示例图
 在 docker 启动的时候设置环境变量 PRELOAD=true, 从而实现启动脚本的时候加载数据。
 1. 使用 docker run
-   使用 docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
+   使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
 2. 使用 docker-compose
-   创建 docker-compose.yml,具体文件如下
+   创建 docker-compose.yml,具体文件如下,在环境变量中设置 PRELOAD=true。其中,example.groovy 是一个预定义的脚本,用于预加载样例数据。如果有需要,可以通过挂载新的 example.groovy 脚本改变预加载的数据。
    version: '3'
    services:
      graph:
        container_name: graph
        environment:
          - PRELOAD=true
+       volumes:
+         - /path/to/yourscript:/hugegraph/scripts/example.groovy
        ports:
-         - 18080:8080
+         - 8080:8080
    使用命令 docker-compose up -d 启动容器
 使用 RESTful API 请求 HugeGraphServer 得到如下结果:
-> curl "http://localhost:18080/graphs/hugegraph/graph/vertices" | gunzip
+> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
 
 {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex",
style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;ripple&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;lang&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;java&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;price&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">199</span><span style="color:#000;font-weight:bold">}}]}</span> </span></span></code></pre></div><p>代表创建示例图成功。</p> @@ -1446,7 +1449,7 @@ </span></span><span style="display:flex;"><span>serializer=binary </span></span><span style="display:flex;"><span>rocksdb.data_path=. </span></span><span style="display:flex;"><span>rocksdb.wal_path=. 
-</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -1473,7 +1476,7 @@ </span></span><span style="display:flex;"><span>#hbase.enable_partition=true </span></span><span style="display:flex;"><span>#hbase.vertex_partitions=10 </span></span><span style="display:flex;"><span>#hbase.edge_partitions=30 -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -1505,7 +1508,7 @@ </span></span><span style="display:flex;"><span>jdbc.reconnect_max_times=3 </span></span><span style="display:flex;"><span>jdbc.reconnect_interval=3 </span></span><span style="display:flex;"><span>jdbc.ssl_mode=false -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" 
style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -1533,7 +1536,7 @@ </span></span><span style="display:flex;"><span> </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span><span style="display:flex;"><span>Initing HugeGraph Store... 
@@ -1594,7 +1597,7 @@ </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 </span></span></code></pre></div><p>由于 scylladb 数据库本身就是基于 cassandra 的&quot;优化版&quot;,如果用户未安装 scylladb,也可以直接使用 cassandra 作为后端存储,只需要把 backend 和 serializer 修改为 scylladb,host 和 post 指向 cassandra 集群的 seeds 和 port 即可,但是并不建议这样做,这样发挥不出 scylladb 本身的优势了。</p> -<p>初始化数据库(仅第一次启动时需要)</p> +<p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -1694,7 +1697,12 @@ </span></span><span style="display:flex;"><span> <span style="color:#a40000">...</span> </span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span> </span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span> -</span></span></code></pre></div><p>详细的 API 请参考 <a href="/docs/clients/restful-api">RESTful-API</a> 文档</p> +</span></span></code></pre></div><p id="swaggerui-example"></p> +<p>详细的 API 请参考 <a href="/docs/clients/restful-api">RESTful-API</a> 文档。</p> +<p>另外也可以通过访问 <code>localhost:8080/swagger-ui/index.html</code> 查看 API。</p> +<div style="text-align: center;"> +<img src="/docs/images/images-server/621swaggerui示例.png" alt="image"> +</div> <h3 id="7-停止-server">7 停止 Server</h3> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span 
style="display:flex;"><span><span style="color:#000">$cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span><span style="color:#000">$bin</span>/stop-hugegraph.sh @@ -7510,7 +7518,8 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> </ul> <h4 id="21-使用-docker-推荐">2.1 使用 Docker (推荐)</h4> <blockquote> -<p><strong>特别注意</strong>: 如果使用 docker 启动 hubble,且 hubble 和 server 位于同一宿主机,在后续 hubble 页面中设置 graph 的 hostname 的时候请不要直接设置 <code>localhost/127.0.0.1</code>,这将指向 hubble 容器内部而非宿主机,导致无法连接到 server. 如果 hubble 和 server 在同一 docker 网络下,则可以直接使用<code>container_name</code>作为主机名,端口则为 8080. 或者也可以使用宿主机 ip 作为主机名,此时端口号为宿主机为 server 配置的端口</p> +<p><strong>特别注意</strong>: docker 模式下,若 hubble 和 server 在同一宿主机,hubble 页面中设置 graph 的 <code>hostname</code> <strong>不能设置</strong>为 <code>localhost/127.0.0.1</code>,因这会指向 hubble <strong>容器内部</strong>而非宿主机,导致无法连接到 server.</p> +<p>若 hubble 和 server 在同一 docker 网络下,<strong>推荐</strong>直接使用<code>container_name</code> (如下例的 <code>graph</code>) 作为主机名。或者也可以使用 <strong>宿主机 IP</strong> 作为主机名,此时端口号为宿主机给 server 配置的端口</p> </blockquote> <p>我们可以使用 <code>docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble</code> 快速启动 <a href="https://hub.docker.com/r/hugegraph/hubble">hubble</a>.</p> <p>或者使用 docker-compose 启动 hubble,另外如果 hubble 和 graph 在同一个 docker 网络下,可以使用 graph 的 contain_name 进行访问,而不需要宿主机的 ip</p> @@ -7521,7 +7530,7 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span 
style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">container_name</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">graph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">hubble</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span 
style="color:#000">hugegraph/hubble</span><span style="color:#f8f8f8;text-decoration:underline"> @@ -7580,6 +7589,9 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> <div style="text-align: center;"> <img src="/docs/images/images-hubble/311图创建2.png" alt="image"> </div> +<blockquote> +<p><strong>注意</strong>:如果使用 docker 启动 <code>hubble</code>,且 <code>server</code> 和 <code>hubble</code> 位于同一宿主机,不能直接使用 <code>localhost/127.0.0.1</code> 作为主机名。如果 <code>hubble</code> 和 <code>server</code> 在同一 docker 网络下,则可以直接使用 container_name 作为主机名,端口则为 8080。或者也可以使用宿主机 ip 作为主机名,此时端口为宿主机为 server 配置的端口</p> +</blockquote> <h5 id="412图访问">4.1.2 图访问</h5> <p>实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。</p> <div style="text-align: center;"> @@ -7686,6 +7698,9 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> <h5 id="425索引类型">4.2.5 索引类型</h5> <p>展示顶点类型和边类型的顶点索引和边索引。</p> <h4 id="43数据导入">4.3 数据导入</h4> +<blockquote> +<p><strong>注意</strong>:目前推荐使用 <a href="/cn/docs/quickstart/hugegraph-loader">hugegraph-loader</a> 进行正式数据导入, hubble 内置的导入用来做<strong>测试</strong>和<strong>简单上手</strong></p> +</blockquote> <p>数据导入的使用流程如下:</p> <center> <img src="/docs/images/images-hubble/33导入流程.png" alt="image"> diff --git a/cn/docs/quickstart/_print/index.html b/cn/docs/quickstart/_print/index.html index ab3772130..3ff2ecb29 100644 --- a/cn/docs/quickstart/_print/index.html +++ b/cn/docs/quickstart/_print/index.html @@ -1,14 +1,15 @@ Quick Start | HugeGraph

1 - HugeGraph-Server Quick Start

1 HugeGraph-Server 概述

HugeGraph-Server 是 HugeGraph 项目的核心部分,包含 Core、Backend、API 等子模块。

Core 模块是 Tinkerpop 接口的实现,Backend 模块用于管理数据存储,目前支持的后端包括:Memory、Cassandra、ScyllaDB 以及 RocksDB,API 模块提供 HTTP Server,将 Client 的 HTTP 请求转化为对 Core 的调用。

文档中会大量出现 HugeGraph-Server 和 HugeGraphServer 这两种写法,其他组件也类似。这两种写法含义上并无大的差异,可以这么区分:HugeGraph-Server 表示服务端相关组件代码,HugeGraphServer 表示服务进程。

2 依赖

2.1 安装 Java 11 (JDK 11)

请优先考虑在 Java 11 的环境上启动 HugeGraph-Server,目前同时保留对 Java 8 的兼容

在往下阅读之前务必执行 java -version 命令查看 jdk 版本

java -version
-

3 部署

有四种方式可以部署 HugeGraph-Server 组件:

  • 方式 1:使用 Docker 容器 (推荐)
  • 方式 2:下载 tar 包
  • 方式 3:源码编译
  • 方式 4:使用 tools 工具部署 (Outdated)

3.1 使用 Docker 容器 (推荐)

可参考 Docker 部署方式

我们可以使用 docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph 快速启动一个内置了 RocksDB 的 HugeGraph Server。

可选项:

  1. 可以使用 docker exec -it graph bash 进入容器完成一些操作
  2. 可以使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph 在启动的时候预加载一个 内置的样例图。

另外,我们也可以使用 docker-compose完成部署,使用用 docker-compose up -d, 以下是一个样例的 docker-compose.yml:

version: '3'
+

3 部署

有四种方式可以部署 HugeGraph-Server 组件:

  • 方式 1:使用 Docker 容器 (推荐)
  • 方式 2:下载 tar 包
  • 方式 3:源码编译
  • 方式 4:使用 tools 工具部署 (Outdated)

3.1 使用 Docker 容器 (推荐)

可参考 Docker 部署方式

我们可以使用 docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph 快速启动一个内置了 RocksDB 的 HugeGraph Server。

可选项:

  1. 可以使用 docker exec -it graph bash 进入容器完成一些操作
  2. 可以使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph 在启动的时候预加载一个内置的样例图。可以通过 RESTful API 进行验证。具体步骤可以参考 5.1.1

另外,如果我们希望在一个文件中统一管理除 server 之外的其他 HugeGraph 相关实例,也可以使用 docker-compose 完成部署(当然只配置 server 也是可以的),使用命令 docker-compose up -d,以下是一个样例的 docker-compose.yml:

version: '3'
 services:
   graph:
     image: hugegraph/hugegraph
-    #environment:
+    # environment:
     #  - PRELOAD=true
+    # PRELOAD 为可选参数,为 True 时可以在启动的时候预加载一个内置的样例图
     ports:
-      - 18080:8080
+      - 8080:8080
 

3.2 下载 tar 包

# use the latest version, here is 1.0.0 for example
 wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
 tar zxf *hugegraph*.tar.gz
@@ -41,23 +42,25 @@
 # enter the tool's package
 cd *hugegraph*/*tool* 
 

注:${version} 为版本号,最新版本号可参考 Download 页面,或直接从 Download 页面点击链接下载

HugeGraph-Tools 的总入口脚本是 bin/hugegraph,用户可以使用 help 子命令查看其用法,这里只介绍一键部署的命令。

bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
-

{hugegraph-version} 表示要部署的 HugeGraphServer 及 HugeGraphStudio 的版本,用户可查看 conf/version-mapping.yaml 文件获取版本信息,{install-path} 指定 HugeGraphServer 及 HugeGraphStudio 的安装目录,{download-path-prefix} 可选,指定 HugeGraphServer 及 HugeGraphStudio tar 包的下载地址,不提供时使用默认下载地址,比如要启动 0.6 版本的 HugeGraph-Server 及 HugeGraphStudio 将上述命令写为 bin/hugegraph deploy -v 0.6 -p services 即可。

4 配置

如果需要快速启动 HugeGraph 仅用于测试,那么只需要进行少数几个配置项的修改即可(见下一节)。

详细的配置介绍请参考配置文档及配置项介绍

5 启动

5.1 使用 Docker

3.1 使用 Docker 容器中,我们已经介绍了 如何使用 docker 部署 hugegraph-server, 我们还可以设置参数在 sever 启动的时候加载样例图

5.1.1 启动 server 的时候创建示例图

在 docker 启动的时候设置环境变量 PRELOAD=true,从而在启动脚本执行时预加载样例数据。

  1. 使用docker run

    使用 docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

  2. 使用docker-compose

    创建docker-compose.yml,具体文件如下

    version: '3'
    +

    {hugegraph-version} 表示要部署的 HugeGraphServer 及 HugeGraphStudio 的版本,用户可查看 conf/version-mapping.yaml 文件获取版本信息,{install-path} 指定 HugeGraphServer 及 HugeGraphStudio 的安装目录,{download-path-prefix} 可选,指定 HugeGraphServer 及 HugeGraphStudio tar 包的下载地址,不提供时使用默认下载地址,比如要启动 0.6 版本的 HugeGraph-Server 及 HugeGraphStudio 将上述命令写为 bin/hugegraph deploy -v 0.6 -p services 即可。

    4 配置

    如果需要快速启动 HugeGraph 仅用于测试,那么只需要进行少数几个配置项的修改即可(见下一节)。

    详细的配置介绍请参考配置文档及配置项介绍

    5 启动

    5.1 使用 Docker

    3.1 使用 Docker 容器中,我们已经介绍了如何使用 docker 部署 hugegraph-server, 我们还可以设置参数在 sever 启动的时候加载样例图

    5.1.1 启动 server 的时候创建示例图

    在 docker 启动的时候设置环境变量 PRELOAD=true,从而在启动脚本执行时预加载样例数据。

    1. 使用docker run

      使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

    2. 使用docker-compose

      创建docker-compose.yml,具体文件如下,在环境变量中设置 PRELOAD=true。其中,example.groovy 是一个预定义的脚本,用于预加载样例数据。如果有需要,可以通过挂载新的 example.groovy 脚本改变预加载的数据。

      version: '3'
         services:
           graph:
             image: hugegraph/hugegraph:latest
             container_name: graph
             environment:
               - PRELOAD=true
      +      volumes:
      +        - /path/to/yourscript:/hugegraph/scripts/example.groovy
             ports:
      -        - 18080:8080
      -

      使用命令 docker-compose up -d 启动容器

    使用 RESTful API 请求 HugeGraphServer 得到如下结果:

    > curl "http://localhost:18080/graphs/hugegraph/graph/vertices" | gunzip
    +        - 8080:8080
    +

    使用命令 docker-compose up -d 启动容器

使用 RESTful API 请求 HugeGraphServer 得到如下结果:

> curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
 
 {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
 

代表创建示例图成功。
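上述返回结果较长,可以用一个简单的 shell 脚本粗略统计其中的顶点数量做校验(以下为示意脚本:为便于演示,这里只截取了响应中的前两个顶点作为假设数据;正式场景建议使用 jq 等 JSON 工具):

```shell
# 将示例响应保存到本地文件(此处为截断的示例数据,仅含两个顶点,属假设)
cat > resp.json <<'EOF'
{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}}]}
EOF

# 统计响应中 "type":"vertex" 出现的次数,即返回的顶点个数
grep -o '"type":"vertex"' resp.json | wc -l
# → 2
```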

5.2 使用启动脚本启动

启动分为"首次启动"和"非首次启动",这么区分是因为在第一次启动前需要初始化后端数据库,然后启动服务。

而在人为停掉服务后,或者其他原因需要再次启动服务时,因为后端数据库是持久化存在的,直接启动服务即可。

HugeGraphServer 启动时会连接后端存储并尝试检查后端存储版本号,如果未初始化后端或者后端已初始化但版本不匹配时(旧版本数据),HugeGraphServer 会启动失败,并给出错误信息。

如果需要外部访问 HugeGraphServer,请修改 rest-server.properties 中的 restserver.url 配置项(默认为 http://127.0.0.1:8080),将其修改成机器名或 IP 地址。
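例如,可以用 sed 一步完成该配置项的修改(以下为示意脚本:为避免误改真实配置,这里先构造一个结构类似的示例文件;其中的 IP 192.168.1.10 为假设值,实际文件位于 conf/rest-server.properties):

```shell
# 构造一个与 conf/rest-server.properties 结构类似的示例文件(内容为假设)
cat > rest-server.properties <<'EOF'
restserver.url=http://127.0.0.1:8080
gremlinserver.url=http://127.0.0.1:8182
EOF

# 将 restserver.url 的监听地址从默认的 127.0.0.1 改为机器 IP(此处 IP 为假设值)
sed -i 's#^restserver.url=.*#restserver.url=http://192.168.1.10:8080#' rest-server.properties

grep '^restserver.url' rest-server.properties
# → restserver.url=http://192.168.1.10:8080
```

修改完成后重启 HugeGraphServer 使配置生效。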

由于各种后端所需的配置(hugegraph.properties)及启动步骤略有不同,下面逐一对各后端的配置及启动做介绍。

5.2.1 RocksDB
点击展开/折叠 RocksDB 配置及启动方法

RocksDB 是一个嵌入式的数据库,不需要手动安装部署,要求 GCC 版本 >= 4.3.0(GLIBCXX_3.4.10),如不满足,需要提前升级 GCC

修改 hugegraph.properties

backend=rocksdb
 serializer=binary
 rocksdb.data_path=.
 rocksdb.wal_path=.
-

初始化数据库(仅第一次启动时需要)

cd *hugegraph-${version}
+

初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

cd *hugegraph-${version}
 bin/init-store.sh
 

启动 server

bin/start-hugegraph.sh
 Starting HugeGraphServer...
@@ -73,7 +76,7 @@
 #hbase.enable_partition=true
 #hbase.vertex_partitions=10
 #hbase.edge_partitions=30
-

初始化数据库(仅第一次启动时需要)

cd *hugegraph-${version}
+

初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

cd *hugegraph-${version}
 bin/init-store.sh
 

启动 server

bin/start-hugegraph.sh
 Starting HugeGraphServer...
@@ -91,7 +94,7 @@
 jdbc.reconnect_max_times=3
 jdbc.reconnect_interval=3
 jdbc.ssl_mode=false
-

初始化数据库(仅第一次启动时需要)

cd *hugegraph-${version}
+

初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

cd *hugegraph-${version}
 bin/init-store.sh
 

启动 server

bin/start-hugegraph.sh
 Starting HugeGraphServer...
@@ -109,7 +112,7 @@
 
 #cassandra.keyspace.strategy=SimpleStrategy
 #cassandra.keyspace.replication=3
-

初始化数据库(仅第一次启动时需要)

cd *hugegraph-${version}
+

初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

cd *hugegraph-${version}
 bin/init-store.sh
 Initing HugeGraph Store...
 2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
@@ -149,7 +152,7 @@
 
 #cassandra.keyspace.strategy=SimpleStrategy
 #cassandra.keyspace.replication=3
-

由于 scylladb 数据库本身就是基于 cassandra 的"优化版",如果用户未安装 scylladb,也可以直接使用 cassandra 作为后端存储,只需要把 backend 和 serializer 修改为 scylladb,host 和 port 指向 cassandra 集群的 seeds 和 port 即可,但是并不建议这样做,这样发挥不出 scylladb 本身的优势了。

初始化数据库(仅第一次启动时需要)

cd *hugegraph-${version}
+

由于 scylladb 数据库本身就是基于 cassandra 的"优化版",如果用户未安装 scylladb,也可以直接使用 cassandra 作为后端存储,只需要把 backend 和 serializer 修改为 scylladb,host 和 port 指向 cassandra 集群的 seeds 和 port 即可,但是并不建议这样做,这样发挥不出 scylladb 本身的优势了。

初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

cd *hugegraph-${version}
 bin/init-store.sh
 

启动 server

bin/start-hugegraph.sh
 Starting HugeGraphServer...
@@ -217,7 +220,7 @@
         ...
     ]
 }
-

详细的 API 请参考 RESTful-API 文档

7 停止 Server

$cd *hugegraph-${version}
+

详细的 API 请参考 RESTful-API 文档。

另外也可以通过访问 localhost:8080/swagger-ui/index.html 查看 API。

image

7 停止 Server

$cd *hugegraph-${version}
 $bin/stop-hugegraph.sh
 

8 使用 IntelliJ IDEA 调试 Server

请参考在 IDEA 中配置 Server 开发环境

2 - HugeGraph-Loader Quick Start

1 HugeGraph-Loader 概述

HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。

目前支持的数据源包括:

  • 本地磁盘文件或目录,支持 TEXT、CSV 和 JSON 格式的文件,支持压缩文件
  • HDFS 文件或目录,支持压缩文件
  • 主流关系型数据库,如 MySQL、PostgreSQL、Oracle、SQL Server

本地磁盘文件和 HDFS 文件支持断点续传。

后面会具体说明。
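上面提到本地文件与 HDFS 文件均支持压缩格式,可以按如下方式构造一个 gzip 压缩的 CSV 作为 loader 的示例输入(示意脚本,数据内容与文件名均为假设):

```shell
# 生成一个带表头的示例 CSV,并压缩为 gzip 格式(数据为假设)
printf 'name,age\nmarko,29\n' > data.csv
gzip -f data.csv          # 生成 data.csv.gz,原文件被替换

# loader 可直接读取此类压缩文件;这里用 zcat 验证压缩包内容
zcat data.csv.gz | wc -l
# → 2
```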

注意:使用 HugeGraph-Loader 需要依赖 HugeGraph Server 服务,下载和启动 Server 请参考 HugeGraph-Server Quick Start

2 获取 HugeGraph-Loader

有两种方式可以获取 HugeGraph-Loader:

  • 下载已编译的压缩包
  • 克隆源码编译安装

2.1 下载已编译的压缩包

下载最新版本的 HugeGraph-Toolchain Release 包,里面包含了 loader + tool + hubble 全套工具,如果你已经下载,可跳过重复步骤

wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
 tar zxf *hugegraph*.tar.gz
@@ -695,13 +698,13 @@
 --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
 --username admin --token admin --host xx.xx.xx.xx --port 8093 \
 --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
-

3 - HugeGraph-Hubble Quick Start

1 HugeGraph-Hubble 概述

HugeGraph 是一款面向分析型,支持批量操作的图数据库系统,它由百度安全团队自主研发,全面支持Apache TinkerPop3框架和Gremlin图查询语言,提供导出、备份、恢复等完善的工具链生态,有效解决海量图数据的存储、查询和关联分析需求。HugeGraph 广泛应用于银行券商的风控打击、保险理赔、推荐搜索、公安犯罪打击、知识图谱构建、网络安全、IT 运维等领域,致力于让更多行业、组织及用户享受到更广泛的数据综合价值。

HugeGraph-Hubble 是 HugeGraph 的一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程,实现了图应用的全流程向导式操作,旨在提升用户的使用流畅度,降低用户的使用门槛,提供更为高效易用的使用体验。

平台主要包括以下模块:

图管理

图管理模块通过图的创建,连接平台与图数据,实现多图的统一管理,并实现图的访问、编辑、删除、查询操作。

元数据建模

元数据建模模块通过创建属性库,顶点类型,边类型,索引类型,实现图模型的构建与管理,平台提供两种模式,列表模式和图模式,可实时展示元数据模型,更加直观。同时还提供了跨图的元数据复用功能,省去相同元数据繁琐的重复创建过程,极大地提升建模效率,增强易用性。

数据导入

数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务,实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容,就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。

图分析

通过输入图遍历语言 Gremlin 可实现图数据的高性能通用分析,并提供顶点的定制化多维路径查询等功能,提供 3 种图结果展示方式,包括:图形式、表格形式、Json 形式,多维度展示数据形态,满足用户使用的多种场景需求。提供运行记录及常用语句收藏等功能,实现图操作的可追溯,以及查询输入的复用共享,快捷高效。支持图数据的导出,导出格式为 Json 格式。

任务管理

对于需要遍历全图的 Gremlin 任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。

2 部署

有三种方式可以部署 hugegraph-hubble:

  • 使用 docker (推荐)
  • 下载 toolchain 二进制包
  • 源码编译

2.1 使用 Docker (推荐)

特别注意: 如果使用 docker 启动 hubble,且 hubble 和 server 位于同一宿主机,在后续 hubble 页面中设置 graph 的 hostname 的时候请不要直接设置 localhost/127.0.0.1,这将指向 hubble 容器内部而非宿主机,导致无法连接到 server. 如果 hubble 和 server 在同一 docker 网络下,则可以直接使用container_name作为主机名,端口则为 8080. 或者也可以使用宿主机 ip 作为主机名,此时端口号为宿主机为 server 配置的端口

我们可以使用 docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble 快速启动 hubble.

或者使用 docker-compose 启动 hubble,另外如果 hubble 和 graph 在同一个 docker 网络下,可以使用 graph 的 container_name 进行访问,而不需要宿主机的 IP

使用 docker-compose up -d 启动,docker-compose.yml 如下:

version: '3'
+

3 - HugeGraph-Hubble Quick Start

1 HugeGraph-Hubble 概述

HugeGraph 是一款面向分析型,支持批量操作的图数据库系统,它由百度安全团队自主研发,全面支持Apache TinkerPop3框架和Gremlin图查询语言,提供导出、备份、恢复等完善的工具链生态,有效解决海量图数据的存储、查询和关联分析需求。HugeGraph 广泛应用于银行券商的风控打击、保险理赔、推荐搜索、公安犯罪打击、知识图谱构建、网络安全、IT 运维等领域,致力于让更多行业、组织及用户享受到更广泛的数据综合价值。

HugeGraph-Hubble 是 HugeGraph 的一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程,实现了图应用的全流程向导式操作,旨在提升用户的使用流畅度,降低用户的使用门槛,提供更为高效易用的使用体验。

平台主要包括以下模块:

图管理

图管理模块通过图的创建,连接平台与图数据,实现多图的统一管理,并实现图的访问、编辑、删除、查询操作。

元数据建模

元数据建模模块通过创建属性库,顶点类型,边类型,索引类型,实现图模型的构建与管理,平台提供两种模式,列表模式和图模式,可实时展示元数据模型,更加直观。同时还提供了跨图的元数据复用功能,省去相同元数据繁琐的重复创建过程,极大地提升建模效率,增强易用性。

数据导入

数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务,实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容,就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。

图分析

通过输入图遍历语言 Gremlin 可实现图数据的高性能通用分析,并提供顶点的定制化多维路径查询等功能,提供 3 种图结果展示方式,包括:图形式、表格形式、Json 形式,多维度展示数据形态,满足用户使用的多种场景需求。提供运行记录及常用语句收藏等功能,实现图操作的可追溯,以及查询输入的复用共享,快捷高效。支持图数据的导出,导出格式为 Json 格式。

任务管理

对于需要遍历全图的 Gremlin 任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。

2 部署

有三种方式可以部署 hugegraph-hubble:

  • 使用 docker (推荐)
  • 下载 toolchain 二进制包
  • 源码编译

2.1 使用 Docker (推荐)

特别注意: docker 模式下,若 hubble 和 server 在同一宿主机,hubble 页面中设置 graph 的 hostname 不能设置localhost/127.0.0.1,因这会指向 hubble 容器内部而非宿主机,导致无法连接到 server.

若 hubble 和 server 在同一 docker 网络下,推荐直接使用container_name (如下例的 graph) 作为主机名。或者也可以使用 宿主机 IP 作为主机名,此时端口号为宿主机给 server 配置的端口

我们可以使用 docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble 快速启动 hubble.

或者使用 docker-compose 启动 hubble,另外如果 hubble 和 graph 在同一个 docker 网络下,可以使用 graph 的 container_name 进行访问,而不需要宿主机的 IP

使用 docker-compose up -d 启动,docker-compose.yml 如下:

version: '3'
 services:
   server:
     image: hugegraph/hugegraph
     container_name: graph
     ports:
-      - 18080:8080
+      - 8080:8080
 
   hubble:
     image: hugegraph/hubble
@@ -732,7 +735,7 @@
 mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
 cd apache-hugegraph-hubble-incubating*
 

启动hubble

bin/start-hubble.sh -d
-

3 平台使用流程

平台的模块使用流程如下:

image

4 平台使用说明

4.1 图管理

4.1.1 图创建

图管理模块下,点击【创建图】,通过填写图 ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。

image

创建图填写内容如下:

image
4.1.2 图访问

实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。

image
4.1.3 图管理
  1. 用户通过对图的概览、搜索以及单图的信息编辑与删除,实现图的统一管理。
  2. 搜索范围:可对图名称和 ID 进行搜索。
image

4.2 元数据建模(列表 + 图模式)

4.2.1 模块入口

左侧导航处:

image
4.2.2 属性类型
4.2.2.1 创建
  1. 填写或选择属性名称、数据类型、基数,完成属性的创建。
  2. 创建的属性可作为顶点类型和边类型的属性。

列表模式:

image

图模式:

image
4.2.2.2 复用
  1. 平台提供【复用】功能,可直接复用其他图的元数据。
  2. 选择需要复用的图 ID,继续选择需要复用的属性,之后平台会进行是否冲突的校验,通过后,可实现元数据的复用。

选择复用项:

image

校验复用项:

image
4.2.2.3 管理
  1. 在属性列表中可进行单条删除或批量删除操作。
4.2.3 顶点类型
4.2.3.1 创建
  1. 填写或选择顶点类型名称、ID 策略、关联属性、主键属性,顶点样式、查询结果中顶点下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成顶点类型的创建。

列表模式:

image

图模式:

image
4.2.3.2 复用
  1. 顶点类型的复用,会将此类型关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 4.2.2.2。
4.2.3.3 管理
  1. 可进行编辑操作,顶点样式、关联类型、顶点展示内容、属性索引可编辑,其余不可编辑。

  2. 可进行单条删除或批量删除操作。

image
4.2.4 边类型
4.2.4.1 创建
  1. 填写或选择边类型名称、起点类型、终点类型、关联属性、是否允许多次连接、边样式、查询结果中边下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成边类型的创建。

列表模式:

image

图模式:

image
4.2.4.2 复用
  1. 边类型的复用,会将此类型的起点类型、终点类型、关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 4.2.2.2。
4.2.4.3 管理
  1. 可进行编辑操作,边样式、关联属性、边展示内容、属性索引可编辑,其余不可编辑,同顶点类型。
  2. 可进行单条删除或批量删除操作。
4.2.5 索引类型

展示顶点类型和边类型的顶点索引和边索引。

4.3 数据导入

数据导入的使用流程如下:

image
4.3.1 模块入口

左侧导航处:

image
4.3.2 创建任务
  1. 填写任务名称和备注(非必填),可以创建导入任务。
  2. 可创建多个导入任务,并行导入。
image
4.3.3 上传文件
  1. 上传需要构图的文件,目前支持的格式为 CSV,后续会不断更新。
  2. 可同时上传多个文件。
image
4.3.4 设置数据映射
  1. 对上传的文件分别设置数据映射,包括文件设置和类型设置

  2. 文件设置:勾选或填写是否包含表头、分隔符、编码格式等文件本身的设置内容,均设置默认值,无需手动填写

  3. 类型设置:

    1. 顶点映射和边映射:

      【顶点类型】 :选择顶点类型,并为其 ID 映射上传文件中列数据;

      【边类型】:选择边类型,为其起点类型和终点类型的 ID 列映射上传文件的列数据;

    2. 映射设置:为选定的顶点类型的属性映射上传文件中的列数据,此处,若属性名称与文件的表头名称一致,可自动匹配映射属性,无需手动填选

    3. 完成设置后,显示设置列表,方可进行下一步操作,支持映射的新增、编辑、删除操作

设置映射的填写内容:

image

映射列表:

image
4.3.5 导入数据

导入前需要填写导入设置参数,填写完成后,可开始向图库中导入数据

  1. 导入设置
  • 导入设置参数项如下图所示,均设置默认值,无需手动填写
image
  2. 导入详情
  • 点击开始导入,开始文件的导入任务
  • 导入详情中提供每个上传文件设置的映射类型、导入速度、导入的进度、耗时以及当前任务的具体状态,并可对每个任务进行暂停、继续、停止等操作
  • 若导入失败,可查看具体原因
image

4.4 数据分析

4.4.1 模块入口

左侧导航处:

image
4.4.2 多图切换

通过左侧切换入口,灵活切换多图的操作空间

image
4.4.3 图分析与处理

HugeGraph 支持 Apache TinkerPop3 的图遍历查询语言 Gremlin,Gremlin 是一种通用的图数据库查询语言,通过输入 Gremlin 语句,点击执行,即可执行图数据的查询分析操作,并可实现顶点/边的创建及删除、顶点/边的属性修改等。

Gremlin 查询后,下方为图结果展示区域,提供 3 种图结果展示方式,分别为:【图模式】、【表格模式】、【Json 模式】。

支持缩放、居中、全屏、导出等操作。

【图模式】

image

【表格模式】

image

【Json 模式】

image
4.4.4 数据详情

点击顶点/边实体,可查看顶点/边的数据详情,包括:顶点/边类型,顶点 ID,属性及对应值,拓展图的信息展示维度,提高易用性。

4.4.5 图结果的多维路径查询

除了全局的查询外,可针对查询结果中的顶点进行深度定制化查询以及隐藏操作,实现图结果的定制化挖掘。

右击顶点,出现顶点的菜单入口,可进行展示、查询、隐藏等操作。

  • 展开:点击后,展示与选中点关联的顶点。
  • 查询:通过选择与选中点关联的边类型及边方向,在此条件下,再选择其属性及相应筛选规则,可实现定制化的路径展示。
  • 隐藏:点击后,隐藏选中点及与之关联的边。

双击顶点,也可展示与选中点关联的顶点。

image
4.4.6 新增顶点/边
4.4.6.1 新增顶点

在图区可通过两个入口,动态新增顶点,如下:

  1. 点击图区面板,出现添加顶点入口
  2. 点击右上角的操作栏中的首个图标

通过选择或填写顶点类型、ID 值、属性信息,完成顶点的增加。

入口如下:

image

添加顶点内容如下:

image
4.4.6.2 新增边

右击图结果中的顶点,可增加该点的出边或者入边。

4.4.7 执行记录与收藏的查询
  1. 图区下方记载每次查询记录,包括:查询时间、执行类型、内容、状态、耗时、以及【收藏】和【加载】操作,实现图执行的全方位记录,有迹可循,并可对执行内容快速加载复用
  2. 提供语句的收藏功能,可对常用语句进行收藏操作,方便高频语句快速调用
image

4.5 任务管理

4.5.1 模块入口

左侧导航处:

image
4.5.2 任务管理
  1. 提供异步任务的统一管理与结果查看,异步任务包括 4 类,分别为:
  • gremlin:Gremlin 任务
  • algorithm:OLAP 算法任务
  • remove_schema:删除元数据
  • rebuild_index:重建索引
  2. 列表显示当前图的异步任务信息,包括:任务 ID、任务名称、任务类型、创建时间、耗时、状态、操作,实现对异步任务的管理。
  3. 支持对任务类型和状态进行筛选
  4. 支持搜索任务 ID 和任务名称
  5. 可对异步任务进行删除或批量删除操作
image
4.5.3 Gremlin 异步任务

1.创建任务

  • 数据分析模块,目前支持两种 Gremlin 操作,Gremlin 查询和 Gremlin 任务;若用户切换到 Gremlin 任务,点击执行后,在异步任务中心会建立一条异步任务;
+

3 平台使用流程

平台的模块使用流程如下:

image

4 平台使用说明

4.1 图管理

4.1.1 图创建

图管理模块下,点击【创建图】,通过填写图 ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。

image

创建图填写内容如下:

image

注意:如果使用 docker 启动 hubble,且 server 和 hubble 位于同一宿主机,不能直接使用 localhost/127.0.0.1 作为主机名。如果 hubble 和 server 在同一 docker 网络下,则可以直接使用 container_name 作为主机名,端口则为 8080。或者也可以使用宿主机 IP 作为主机名,此时端口为宿主机给 server 配置的端口。

4.1.2 图访问

实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。

image
4.1.3 图管理
  1. 用户通过对图的概览、搜索以及单图的信息编辑与删除,实现图的统一管理。
  2. 搜索范围:可对图名称和 ID 进行搜索。
image

4.2 元数据建模(列表 + 图模式)

4.2.1 模块入口

左侧导航处:

image
4.2.2 属性类型
4.2.2.1 创建
  1. 填写或选择属性名称、数据类型、基数,完成属性的创建。
  2. 创建的属性可作为顶点类型和边类型的属性。

列表模式:

image

图模式:

image
4.2.2.2 复用
  1. 平台提供【复用】功能,可直接复用其他图的元数据。
  2. 选择需要复用的图 ID,继续选择需要复用的属性,之后平台会进行是否冲突的校验,通过后,可实现元数据的复用。

选择复用项:

image

校验复用项:

image
4.2.2.3 管理
  1. 在属性列表中可进行单条删除或批量删除操作。
4.2.3 顶点类型
4.2.3.1 创建
  1. 填写或选择顶点类型名称、ID 策略、关联属性、主键属性,顶点样式、查询结果中顶点下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成顶点类型的创建。

列表模式:

image

图模式:

image
4.2.3.2 复用
  1. 顶点类型的复用,会将此类型关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 4.2.2.2。
4.2.3.3 管理
  1. 可进行编辑操作,顶点样式、关联类型、顶点展示内容、属性索引可编辑,其余不可编辑。

  2. 可进行单条删除或批量删除操作。

image
4.2.4 边类型
4.2.4.1 创建
  1. 填写或选择边类型名称、起点类型、终点类型、关联属性、是否允许多次连接、边样式、查询结果中边下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成边类型的创建。

列表模式:

image

图模式:

image
4.2.4.2 复用
  1. 边类型的复用,会将此类型的起点类型、终点类型、关联的属性和属性索引一并复用。
  2. 复用功能使用方法类似属性的复用,见 4.2.2.2。
4.2.4.3 管理
  1. 可进行编辑操作,边样式、关联属性、边展示内容、属性索引可编辑,其余不可编辑,同顶点类型。
  2. 可进行单条删除或批量删除操作。
4.2.5 索引类型

展示顶点类型和边类型的顶点索引和边索引。

4.3 数据导入

注意:目前推荐使用 hugegraph-loader 进行正式数据导入, hubble 内置的导入用来做测试简单上手

数据导入的使用流程如下:

image
4.3.1 模块入口

左侧导航处:

image
4.3.2 创建任务
  1. 填写任务名称和备注(非必填),可以创建导入任务。
  2. 可创建多个导入任务,并行导入。
image
4.3.3 上传文件
  1. 上传需要构图的文件,目前支持的格式为 CSV,后续会不断更新。
  2. 可同时上传多个文件。
image
4.3.4 设置数据映射
  1. 对上传的文件分别设置数据映射,包括文件设置和类型设置

  2. 文件设置:勾选或填写是否包含表头、分隔符、编码格式等文件本身的设置内容,均设置默认值,无需手动填写

  3. 类型设置:

    1. 顶点映射和边映射:

      【顶点类型】 :选择顶点类型,并为其 ID 映射上传文件中列数据;

      【边类型】:选择边类型,为其起点类型和终点类型的 ID 列映射上传文件的列数据;

    2. 映射设置:为选定的顶点类型的属性映射上传文件中的列数据,此处,若属性名称与文件的表头名称一致,可自动匹配映射属性,无需手动填选

    3. 完成设置后,显示设置列表,方可进行下一步操作,支持映射的新增、编辑、删除操作

设置映射的填写内容:

image

映射列表:

image
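上文 4.3.4 中“属性名与表头名称一致时自动匹配”的行为,可以用下面的 Python 片段示意(假设性示例,仅帮助理解,并非 hubble 的实际实现):

```python
# 示意:按 CSV 表头名称自动匹配顶点属性与列下标(假设性示例,非 hubble 实际实现)
import csv
import io

def auto_match(headers, properties):
    """返回 {属性名: 列下标},仅匹配名称完全一致的列,其余列需手动填选。"""
    index = {h: i for i, h in enumerate(headers)}
    return {p: index[p] for p in properties if p in index}

sample = "name,age,city\nmarko,29,Beijing\n"
headers = next(csv.reader(io.StringIO(sample)))
print(auto_match(headers, ["name", "age", "price"]))
# 输出: {'name': 0, 'age': 1}
```

name、age 与表头同名因而自动匹配,price 在表头中不存在,对应的属性需要手动选择列。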
4.3.5 导入数据

导入前需要填写导入设置参数,填写完成后,可开始向图库中导入数据

  1. 导入设置
  • 导入设置参数项如下图所示,均设置默认值,无需手动填写
image
  2. 导入详情
  • 点击开始导入,开始文件的导入任务
  • 导入详情中提供每个上传文件设置的映射类型、导入速度、导入的进度、耗时以及当前任务的具体状态,并可对每个任务进行暂停、继续、停止等操作
  • 若导入失败,可查看具体原因
image

4.4 数据分析

4.4.1 模块入口

左侧导航处:

image
4.4.2 多图切换

通过左侧切换入口,灵活切换多图的操作空间

image
4.4.3 图分析与处理

HugeGraph 支持 Apache TinkerPop3 的图遍历查询语言 Gremlin,Gremlin 是一种通用的图数据库查询语言,通过输入 Gremlin 语句,点击执行,即可执行图数据的查询分析操作,并可实现顶点/边的创建及删除、顶点/边的属性修改等。

Gremlin 查询后,下方为图结果展示区域,提供 3 种图结果展示方式,分别为:【图模式】、【表格模式】、【Json 模式】。

支持缩放、居中、全屏、导出等操作。

【图模式】

image

【表格模式】

image

【Json 模式】

image
4.4.4 数据详情

点击顶点/边实体,可查看顶点/边的数据详情,包括:顶点/边类型,顶点 ID,属性及对应值,拓展图的信息展示维度,提高易用性。

4.4.5 图结果的多维路径查询

除了全局的查询外,可针对查询结果中的顶点进行深度定制化查询以及隐藏操作,实现图结果的定制化挖掘。

右击顶点,出现顶点的菜单入口,可进行展示、查询、隐藏等操作。

  • 展开:点击后,展示与选中点关联的顶点。
  • 查询:通过选择与选中点关联的边类型及边方向,在此条件下,再选择其属性及相应筛选规则,可实现定制化的路径展示。
  • 隐藏:点击后,隐藏选中点及与之关联的边。

双击顶点,也可展示与选中点关联的顶点。

image
4.4.6 新增顶点/边
4.4.6.1 新增顶点

在图区可通过两个入口,动态新增顶点,如下:

  1. 点击图区面板,出现添加顶点入口
  2. 点击右上角的操作栏中的首个图标

通过选择或填写顶点类型、ID 值、属性信息,完成顶点的增加。

入口如下:

image

添加顶点内容如下:

image
4.4.6.2 新增边

右击图结果中的顶点,可增加该点的出边或者入边。

4.4.7 执行记录与收藏的查询
  1. 图区下方记载每次查询记录,包括:查询时间、执行类型、内容、状态、耗时、以及【收藏】和【加载】操作,实现图执行的全方位记录,有迹可循,并可对执行内容快速加载复用
  2. 提供语句的收藏功能,可对常用语句进行收藏操作,方便高频语句快速调用
image

4.5 任务管理

4.5.1 模块入口

左侧导航处:

image
4.5.2 任务管理
  1. 提供异步任务的统一管理与结果查看,异步任务包括 4 类,分别为:
  • gremlin:Gremlin 任务
  • algorithm:OLAP 算法任务
  • remove_schema:删除元数据
  • rebuild_index:重建索引
  2. 列表显示当前图的异步任务信息,包括:任务 ID、任务名称、任务类型、创建时间、耗时、状态、操作,实现对异步任务的管理。
  3. 支持对任务类型和状态进行筛选
  4. 支持搜索任务 ID 和任务名称
  5. 可对异步任务进行删除或批量删除操作
image
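上述按类型/状态筛选及按任务 ID、名称搜索的逻辑,可以用下面的 Python 片段示意(假设性示例,非 hubble 实际实现):

```python
# 示意:异步任务列表的筛选与搜索(假设性示例,非 hubble 实际实现)
def filter_tasks(tasks, task_type=None, status=None, keyword=None):
    """按任务类型、状态过滤,并按任务 ID 或任务名称中的关键字搜索。"""
    result = []
    for t in tasks:
        if task_type and t["type"] != task_type:
            continue  # 类型不匹配,跳过
        if status and t["status"] != status:
            continue  # 状态不匹配,跳过
        if keyword and keyword not in str(t["id"]) and keyword not in t["name"]:
            continue  # 关键字既不在 ID 也不在名称中,跳过
        result.append(t)
    return result

tasks = [
    {"id": 1, "name": "count-all", "type": "gremlin", "status": "success"},
    {"id": 2, "name": "rebuild-person-index", "type": "rebuild_index", "status": "running"},
]
print(filter_tasks(tasks, task_type="gremlin"))   # 只保留 gremlin 类型的任务
print(filter_tasks(tasks, keyword="rebuild"))     # 按名称关键字搜索
```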
4.5.3 Gremlin 异步任务

1.创建任务

  • 数据分析模块目前支持两种 Gremlin 操作:Gremlin 查询和 Gremlin 任务。若用户切换到 Gremlin 任务,点击执行后,会在异步任务中心建立一条异步任务。

2.任务提交

  • 任务提交成功后,图区部分返回提交结果和任务 ID。

3.任务详情

  • 提供【查看】入口,可跳转到任务详情,查看当前任务的具体执行情况。跳转到任务中心后,直接显示当前执行的任务行。
image

点击查看入口,跳转到任务管理列表,如下:

image

4.查看结果

  • 结果通过 json 形式展示
4.5.4 OLAP 算法任务

Hubble 上暂未提供可视化的 OLAP 算法执行,可调用 RESTful API 进行 OLAP 类算法任务,在任务管理中通过 ID 找到相应任务,查看进度与结果等。

4.5.5 删除元数据、重建索引

1.创建任务

  • 在元数据建模模块中,删除元数据时,可建立删除元数据的异步任务
image
  • 在编辑已有的顶点/边类型操作中,新增索引时,可建立创建索引的异步任务
image

2.任务详情

  • 确认/保存后,可跳转到任务中心查看当前任务的详情
image

4 - HugeGraph-Client Quick Start

1 HugeGraph-Client 概述

HugeGraph-Client 向 HugeGraph-Server 发出 HTTP 请求,获取并解析 Server 的执行结果。目前仅提供了 Java 版,用户可以使用 HugeGraph-Client 编写 Java 代码操作 HugeGraph,比如元数据和图数据的增删改查,或者执行 gremlin 语句。

2 环境要求

  • java 11 (兼容 java 8)
  • maven 3.5+

3 使用流程

使用 HugeGraph-Client 的基本步骤如下:

  • 新建 Eclipse/IDEA Maven 项目;
  • 在 pom 文件中添加 HugeGraph-Client 依赖;
  • 创建类,调用 HugeGraph-Client 接口;

详细使用过程见下节完整示例。

4 完整示例

4.1 新建 Maven 工程

可以选择 Eclipse 或者 IntelliJ IDEA 创建工程:

4.2 添加 hugegraph-client 依赖



 <dependencies>

HugeGraph-Hubble Quick Start

1 HugeGraph-Hubble 概述

HugeGraph 是一款面向分析型,支持批量操作的图数据库系统,它由百度安全团队自主研发,全面支持Apache TinkerPop3框架和Gremlin图查询语言,提供导出、备份、恢复等完善的工具链生态,有效解决海量图数据的存储、查询和关联分析需求。HugeGraph 广泛应用于银行券商的风控打击、保险理赔、推荐搜索、公安犯罪打击、知识图谱构建、网络安全、IT 运维等领域,致力于让更多行业、组织及用户享受到更广泛的数据综合价值。

HugeGraph-Hubble 是 HugeGraph 的一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程,实现了图应用的全流程向导式操作,旨在提升用户的使用流畅度,降低用户的使用门槛,提供更为高效易用的使用体验。

平台主要包括以下模块:

图管理

图管理模块通过图的创建,连接平台与图数据,实现多图的统一管理,并实现图的访问、编辑、删除、查询操作。

元数据建模

元数据建模模块通过创建属性库,顶点类型,边类型,索引类型,实现图模型的构建与管理,平台提供两种模式,列表模式和图模式,可实时展示元数据模型,更加直观。同时还提供了跨图的元数据复用功能,省去相同元数据繁琐的重复创建过程,极大地提升建模效率,增强易用性。

数据导入

数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务,实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容,就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。

图分析

通过输入图遍历语言 Gremlin 可实现图数据的高性能通用分析,并提供顶点的定制化多维路径查询等功能,提供 3 种图结果展示方式,包括:图形式、表格形式、Json 形式,多维度展示数据形态,满足用户使用的多种场景需求。提供运行记录及常用语句收藏等功能,实现图操作的可追溯,以及查询输入的复用共享,快捷高效。支持图数据的导出,导出格式为 Json 格式。

任务管理

对于需要遍历全图的 Gremlin 任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。

2 部署

有三种方式可以部署hugegraph-hubble

  • 使用 docker (推荐)
  • 下载 toolchain 二进制包
  • 源码编译

2.1 使用 Docker (推荐)

特别注意:docker 模式下,若 hubble 和 server 在同一宿主机,在 hubble 页面中设置 graph 的 hostname 时不能填写 localhost/127.0.0.1,因为它会指向 hubble 容器内部而非宿主机,导致无法连接到 server。

若 hubble 和 server 在同一 docker 网络下,推荐直接使用 container_name(如下例的 graph)作为主机名,端口为 8080。或者也可以使用宿主机 IP 作为主机名,此时端口号为宿主机给 server 配置的端口。

我们可以使用 docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble 快速启动 hubble.

或者使用 docker-compose 启动 hubble。另外,如果 hubble 和 graph 在同一个 docker 网络下,可以直接使用 graph 的 container_name 进行访问,而不需要宿主机的 IP。

使用 docker-compose up -d 启动,docker-compose.yml 如下:

version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080

  hubble:
    image: hugegraph/hubble
 mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
 cd apache-hugegraph-hubble-incubating*
 

启动 hubble:

bin/start-hubble.sh -d



    HugeGraph-Server Quick Start

    1 HugeGraph-Server 概述

    HugeGraph-Server 是 HugeGraph 项目的核心部分,包含 Core、Backend、API 等子模块。

    Core 模块是 Tinkerpop 接口的实现,Backend 模块用于管理数据存储,目前支持的后端包括:Memory、Cassandra、ScyllaDB 以及 RocksDB,API 模块提供 HTTP Server,将 Client 的 HTTP 请求转化为对 Core 的调用。

    文档中会大量出现 HugeGraph-ServerHugeGraphServer 这两种写法,其他组件也类似。这两种写法含义上并无大的差异,可以这么区分:HugeGraph-Server 表示服务端相关组件代码,HugeGraphServer 表示服务进程。

    2 依赖

    2.1 安装 Java 11 (JDK 11)

    请优先考虑在 Java 11 的环境上启动 HugeGraph-Server,目前同时保留对 Java 8 的兼容

    在往下阅读之前务必执行 java -version 命令查看 jdk 版本

    java -version

    3 部署

    有四种方式可以部署 HugeGraph-Server 组件:

    • 方式 1:使用 Docker 容器 (推荐)
    • 方式 2:下载 tar 包
    • 方式 3:源码编译
    • 方式 4:使用 tools 工具部署 (Outdated)

    3.1 使用 Docker 容器 (推荐)

    可参考 Docker 部署方式

我们可以使用 docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph 快速启动一个内置 RocksDB 后端的 HugeGraph Server。

    可选项:

    1. 可以使用 docker exec -it graph bash 进入容器完成一些操作
    2. 可以使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph 在启动的时候预加载一个内置的样例图。可以通过 RESTful API 进行验证。具体步骤可以参考 5.1.1

另外,如果希望在一个文件中统一管理 server 之外的其他 HugeGraph 相关实例(当然只配置 server 也是可以的),也可以使用 docker-compose 完成部署,执行命令 docker-compose up -d。以下是一个样例的 docker-compose.yml:

version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    # environment:
    #  - PRELOAD=true
    # PRELOAD 为可选参数,为 True 时可以在启动的时候预加载一个内置的样例图
    ports:
      - 8080:8080
     

    3.2 下载 tar 包

    # use the latest version, here is 1.0.0 for example
     wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
     tar zxf *hugegraph*.tar.gz
     # enter the tool's package
     cd *hugegraph*/*tool* 
     

    注:${version} 为版本号,最新版本号可参考 Download 页面,或直接从 Download 页面点击链接下载

    HugeGraph-Tools 的总入口脚本是 bin/hugegraph,用户可以使用 help 子命令查看其用法,这里只介绍一键部署的命令。

    bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]

      {hugegraph-version} 表示要部署的 HugeGraphServer 及 HugeGraphStudio 的版本,用户可查看 conf/version-mapping.yaml 文件获取版本信息,{install-path} 指定 HugeGraphServer 及 HugeGraphStudio 的安装目录,{download-path-prefix} 可选,指定 HugeGraphServer 及 HugeGraphStudio tar 包的下载地址,不提供时使用默认下载地址,比如要启动 0.6 版本的 HugeGraph-Server 及 HugeGraphStudio 将上述命令写为 bin/hugegraph deploy -v 0.6 -p services 即可。

      4 配置

      如果需要快速启动 HugeGraph 仅用于测试,那么只需要进行少数几个配置项的修改即可(见下一节)。

      详细的配置介绍请参考配置文档配置项介绍

      5 启动

      5.1 使用 Docker

      3.1 使用 Docker 容器中,我们已经介绍了如何使用 docker 部署 hugegraph-server, 我们还可以设置参数在 sever 启动的时候加载样例图

      5.1.1 启动 server 的时候创建示例图

在 docker 启动时设置环境变量 PRELOAD=true,即可在启动脚本执行时预加载样例数据。

      1. 使用docker run

        使用 docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

      2. 使用docker-compose

        创建docker-compose.yml,具体文件如下,在环境变量中设置 PRELOAD=true。其中,example.groovy 是一个预定义的脚本,用于预加载样例数据。如果有需要,可以通过挂载新的 example.groovy 脚本改变预加载的数据。

version: '3'
services:
  graph:
    image: hugegraph/hugegraph:latest
    container_name: graph
    environment:
      - PRELOAD=true
    volumes:
      - /path/to/yourscript:/hugegraph/scripts/example.groovy
    ports:
      - 8080:8080

使用命令 docker-compose up -d 启动容器。

    使用 RESTful API 请求 HugeGraphServer 得到如下结果:

    > curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
     
     {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
     

    代表创建示例图成功。
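上述响应也可以在程序中解析,例如下面这个仅依赖标准库的 Python 片段(样例数据截取自上文的返回结果,仅作演示):

```python
# 示意:解析 /graphs/hugegraph/graph/vertices 返回的 JSON(样例数据取自上文)
import json

response = '''{"vertices":[
  {"id":"2:lop","label":"software","type":"vertex",
   "properties":{"name":"lop","lang":"java","price":328}},
  {"id":"1:josh","label":"person","type":"vertex",
   "properties":{"name":"josh","age":32,"city":"Beijing"}}]}'''

data = json.loads(response)
labels = [v["label"] for v in data["vertices"]]
print(len(data["vertices"]), labels)
# 输出: 2 ['software', 'person']
```

每个顶点包含 id、label、type 及 properties 字段,可按需提取进行后续处理。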

    5.2 使用启动脚本启动

    启动分为"首次启动"和"非首次启动",这么区分是因为在第一次启动前需要初始化后端数据库,然后启动服务。

    而在人为停掉服务后,或者其他原因需要再次启动服务时,因为后端数据库是持久化存在的,直接启动服务即可。

    HugeGraphServer 启动时会连接后端存储并尝试检查后端存储版本号,如果未初始化后端或者后端已初始化但版本不匹配时(旧版本数据),HugeGraphServer 会启动失败,并给出错误信息。

如果需要外部访问 HugeGraphServer,请修改 rest-server.properties 中的 restserver.url 配置项(默认为 http://127.0.0.1:8080),将其修改为机器名或 IP 地址。

    由于各种后端所需的配置(hugegraph.properties)及启动步骤略有不同,下面逐一对各后端的配置及启动做介绍。

    5.2.1 RocksDB
    点击展开/折叠 RocksDB 配置及启动方法

    RocksDB 是一个嵌入式的数据库,不需要手动安装部署,要求 GCC 版本 >= 4.3.0(GLIBCXX_3.4.10),如不满足,需要提前升级 GCC

    修改 hugegraph.properties

    backend=rocksdb
     serializer=binary
     rocksdb.data_path=.
     rocksdb.wal_path=.

    初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

    cd *hugegraph-${version}
     bin/init-store.sh
     

    启动 server

    bin/start-hugegraph.sh
     Starting HugeGraphServer...
     #hbase.enable_partition=true
     #hbase.vertex_partitions=10
     #hbase.edge_partitions=30

    初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

    cd *hugegraph-${version}
     bin/init-store.sh
     

    启动 server

    bin/start-hugegraph.sh
     Starting HugeGraphServer...
     jdbc.reconnect_max_times=3
     jdbc.reconnect_interval=3
     jdbc.ssl_mode=false

    初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

    cd *hugegraph-${version}
     bin/init-store.sh
     

    启动 server

    bin/start-hugegraph.sh
     Starting HugeGraphServer...
     
     #cassandra.keyspace.strategy=SimpleStrategy
     #cassandra.keyspace.replication=3

    初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

    cd *hugegraph-${version}
     bin/init-store.sh
     Initing HugeGraph Store...
     2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
     
     #cassandra.keyspace.strategy=SimpleStrategy
     #cassandra.keyspace.replication=3

由于 scylladb 数据库本身就是基于 cassandra 的“优化版”,如果用户未安装 scylladb,也可以直接使用 cassandra 作为后端存储,只需要把 backend 和 serializer 修改为 scylladb,host 和 port 指向 cassandra 集群的 seeds 和 port 即可。但是并不建议这样做,因为这样发挥不出 scylladb 本身的优势。

    初始化数据库(第一次启动时或在 conf/graphs/ 下手动添加了新配置时需要进行初始化)

    cd *hugegraph-${version}
     bin/init-store.sh
     

    启动 server

    bin/start-hugegraph.sh
     Starting HugeGraphServer...
             ...
         ]
     }

    详细的 API 请参考 RESTful-API 文档。

    另外也可以通过访问 localhost:8080/swagger-ui/index.html 查看 API。

    image

    7 停止 Server

    $cd *hugegraph-${version}
     $bin/stop-hugegraph.sh

    8 使用 IntelliJ IDEA 调试 Server

    请参考在 IDEA 中配置 Server 开发环境


style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;lop&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;lang&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;java&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;price&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">328</span><span style="color:#000;font-weight:bold">}},{</span><span style="color:#4e9a06">&#34;id&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;1:josh&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;label&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;person&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;type&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;josh&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;age&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">32</span><span style="color:#000;font-weight:bold">,</span><span 
style="color:#4e9a06">&#34;city&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;Beijing&#34;</span><span style="color:#000;font-weight:bold">}},{</span><span style="color:#4e9a06">&#34;id&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;1:marko&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;label&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;person&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;type&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;marko&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;age&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">29</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;city&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;Beijing&#34;</span><span style="color:#000;font-weight:bold">}},{</span><span style="color:#4e9a06">&#34;id&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;1:peter&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;label&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;person&#34;</span><span style="color:#000;font-weight:bold">,</span><span 
style="color:#4e9a06">&#34;type&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;peter&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;age&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">35</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;city&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;Shanghai&#34;</span><span style="color:#000;font-weight:bold">}},{</span><span style="color:#4e9a06">&#34;id&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;1:vadas&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;label&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;person&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;type&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vadas&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;age&#34;</span><span 
style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">27</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;city&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;Hongkong&#34;</span><span style="color:#000;font-weight:bold">}},{</span><span style="color:#4e9a06">&#34;id&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;2:ripple&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;label&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;software&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;type&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;vertex&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;properties&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#000;font-weight:bold">{</span><span style="color:#4e9a06">&#34;name&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;ripple&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;lang&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#4e9a06">&#34;java&#34;</span><span style="color:#000;font-weight:bold">,</span><span style="color:#4e9a06">&#34;price&#34;</span><span style="color:#ce5c00;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">199</span><span style="color:#000;font-weight:bold">}}]}</span> </span></span></code></pre></div><p>代表创建示例图成功。</p> @@ -132,7 +135,7 @@ </span></span><span style="display:flex;"><span>serializer=binary </span></span><span style="display:flex;"><span>rocksdb.data_path=. 
</span></span><span style="display:flex;"><span>rocksdb.wal_path=. -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -159,7 +162,7 @@ </span></span><span style="display:flex;"><span>#hbase.enable_partition=true </span></span><span style="display:flex;"><span>#hbase.vertex_partitions=10 </span></span><span style="display:flex;"><span>#hbase.edge_partitions=30 -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -191,7 +194,7 @@ </span></span><span style="display:flex;"><span>jdbc.reconnect_max_times=3 </span></span><span style="display:flex;"><span>jdbc.reconnect_interval=3 </span></span><span style="display:flex;"><span>jdbc.ssl_mode=false -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div 
class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -219,7 +222,7 @@ </span></span><span style="display:flex;"><span> </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 -</span></span></code></pre></div><p>初始化数据库(仅第一次启动时需要)</p> +</span></span></code></pre></div><p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span><span style="display:flex;"><span>Initing HugeGraph Store... 
@@ -280,7 +283,7 @@ </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 </span></span></code></pre></div><p>由于 scylladb 数据库本身就是基于 cassandra 的&quot;优化版&quot;,如果用户未安装 scylladb,也可以直接使用 cassandra 作为后端存储,只需要把 backend 和 serializer 修改为 scylladb,host 和 post 指向 cassandra 集群的 seeds 和 port 即可,但是并不建议这样做,这样发挥不出 scylladb 本身的优势了。</p> -<p>初始化数据库(仅第一次启动时需要)</p> +<p>初始化数据库(第一次启动时或在 <code>conf/graphs/</code> 下手动添加了新配置时需要进行初始化)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>启动 server</p> @@ -380,7 +383,12 @@ </span></span><span style="display:flex;"><span> <span style="color:#a40000">...</span> </span></span><span style="display:flex;"><span> <span style="color:#000;font-weight:bold">]</span> </span></span><span style="display:flex;"><span><span style="color:#000;font-weight:bold">}</span> -</span></span></code></pre></div><p>详细的 API 请参考 <a href="/docs/clients/restful-api">RESTful-API</a> 文档</p> +</span></span></code></pre></div><p id="swaggerui-example"></p> +<p>详细的 API 请参考 <a href="/docs/clients/restful-api">RESTful-API</a> 文档。</p> +<p>另外也可以通过访问 <code>localhost:8080/swagger-ui/index.html</code> 查看 API。</p> +<div style="text-align: center;"> +<img src="/docs/images/images-server/621swaggerui示例.png" alt="image"> +</div> <h3 id="7-停止-server">7 停止 Server</h3> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span 
style="color:#000">$cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span><span style="color:#000">$bin</span>/stop-hugegraph.sh @@ -1441,7 +1449,8 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> </ul> <h4 id="21-使用-docker-推荐">2.1 使用 Docker (推荐)</h4> <blockquote> -<p><strong>特别注意</strong>: 如果使用 docker 启动 hubble,且 hubble 和 server 位于同一宿主机,在后续 hubble 页面中设置 graph 的 hostname 的时候请不要直接设置 <code>localhost/127.0.0.1</code>,这将指向 hubble 容器内部而非宿主机,导致无法连接到 server. 如果 hubble 和 server 在同一 docker 网络下,则可以直接使用<code>container_name</code>作为主机名,端口则为 8080. 或者也可以使用宿主机 ip 作为主机名,此时端口号为宿主机为 server 配置的端口</p> +<p><strong>特别注意</strong>: docker 模式下,若 hubble 和 server 在同一宿主机,hubble 页面中设置 graph 的 <code>hostname</code> <strong>不能设置</strong>为 <code>localhost/127.0.0.1</code>,因这会指向 hubble <strong>容器内部</strong>而非宿主机,导致无法连接到 server.</p> +<p>若 hubble 和 server 在同一 docker 网络下,<strong>推荐</strong>直接使用<code>container_name</code> (如下例的 <code>graph</code>) 作为主机名。或者也可以使用 <strong>宿主机 IP</strong> 作为主机名,此时端口号为宿主机给 server 配置的端口</p> </blockquote> <p>我们可以使用 <code>docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble</code> 快速启动 <a href="https://hub.docker.com/r/hugegraph/hubble">hubble</a>.</p> <p>或者使用 docker-compose 启动 hubble,另外如果 hubble 和 graph 在同一个 docker 网络下,可以使用 graph 的 contain_name 进行访问,而不需要宿主机的 ip</p> @@ -1452,7 +1461,7 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span 
style="color:#204a87;font-weight:bold">container_name</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">graph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">hubble</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hubble</span><span style="color:#f8f8f8;text-decoration:underline"> @@ 
-1511,6 +1520,9 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> <div style="text-align: center;"> <img src="/docs/images/images-hubble/311图创建2.png" alt="image"> </div> +<blockquote> +<p><strong>注意</strong>:如果使用 docker 启动 <code>hubble</code>,且 <code>server</code> 和 <code>hubble</code> 位于同一宿主机,不能直接使用 <code>localhost/127.0.0.1</code> 作为主机名。如果 <code>hubble</code> 和 <code>server</code> 在同一 docker 网络下,则可以直接使用 container_name 作为主机名,端口则为 8080。或者也可以使用宿主机 ip 作为主机名,此时端口为宿主机为 server 配置的端口</p> +</blockquote> <h5 id="412图访问">4.1.2 图访问</h5> <p>实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。</p> <div style="text-align: center;"> @@ -1617,6 +1629,9 @@ HugeGraph Toolchain 版本:toolchain-1.0.0</p> <h5 id="425索引类型">4.2.5 索引类型</h5> <p>展示顶点类型和边类型的顶点索引和边索引。</p> <h4 id="43数据导入">4.3 数据导入</h4> +<blockquote> +<p><strong>注意</strong>:目前推荐使用 <a href="/cn/docs/quickstart/hugegraph-loader">hugegraph-loader</a> 进行正式数据导入, hubble 内置的导入用来做<strong>测试</strong>和<strong>简单上手</strong></p> +</blockquote> <p>数据导入的使用流程如下:</p> <center> <img src="/docs/images/images-hubble/33导入流程.png" alt="image"> diff --git a/cn/sitemap.xml b/cn/sitemap.xml index ebd4cab2e..e0d002a02 100644 --- a/cn/sitemap.xml +++ b/cn/sitemap.xml @@ -1 +1 @@ 
-/cn/docs/guides/architectural/2023-06-25T21:06:07+08:00/cn/docs/config/config-guide/2023-09-19T14:14:13+08:00/cn/docs/language/hugegraph-gremlin/2023-01-01T16:16:43+08:00/cn/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T15:16:23+08:00/cn/docs/quickstart/hugegraph-server/2023-10-09T21:10:07+08:00/cn/docs/introduction/readme/2023-06-18T14:57:33+08:00/cn/docs/changelog/hugegraph-1.0.0-release-notes/2023-01-09T07:41:46+08:00/cn/docs/clients/restful-api/2023-07-31T23:55:30+08:00/cn/docs/clients/restful-api/schema/2023-05-14T19:35:13+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2023-01-01T16:16:43+08:00/cn/docs/contribution-guidelines/contribute/2023-09-09T20:50:32+08:00/cn/docs/config/config-option/2023-09-19T14:14:13+08:00/cn/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/cn/docs/download/download/2023-06-17T14:43:04+08:00/cn/docs/language/hugegraph-example/2023-02-02T01:21:10+08:00/cn/docs/clients/hugegraph-client/2022-09-15T15:16:23+08:00/cn/docs/performance/api-preformance/2023-06-17T14:43:04+08:00/cn/docs/quickstart/hugegraph-loader/2023-10-07T16:52:41+08:00/cn/docs/clients/restful-api/propertykey/2023-05-19T05:15:56-05:00/cn/docs/changelog/hugegraph-0.11.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.12.0-release-notes/2023-01-01T16:16:43+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2023-01-01T16:16:43+08:00/cn/docs/contribution-guidelines/subscribe/2023-06-17T14:43:04+08:00/cn/docs/config/config-authentication/2023-09-19T14:14:13+08:00/cn/docs/clients/gremlin-console/2023-06-12T23:52:07+08:00/cn/docs/guides/custom-plugin/2023-09-19T14:14:13+08:00/cn/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.10.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-hubble/2023-10-09T21:10:07+08:00/cn/docs/co
ntribution-guidelines/validate-release/2023-02-15T16:14:21+08:00/cn/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/cn/docs/config/2022-04-17T11:36:55+08:00/cn/docs/config/config-https/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-client/2023-10-09T17:41:59+08:00/cn/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.9.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/contribution-guidelines/hugegraph-server-idea-setup/2023-06-25T21:06:07+08:00/cn/docs/clients/2022-04-17T11:36:55+08:00/cn/docs/config/config-computer/2023-01-01T16:16:43+08:00/cn/docs/guides/faq/2023-01-04T22:59:07+08:00/cn/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-tools/2023-10-09T17:41:59+08:00/cn/docs/changelog/hugegraph-0.8.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-computer/2023-10-09T17:41:59+08:00/cn/docs/guides/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/rebuild/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.7.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/language/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.6.1-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertex/2023-06-04T23:04:47+08:00/cn/docs/clients/restful-api/edge/2023-06-29T10:17:29+08:00/cn/docs/performance/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.5.6-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/2022-04-17T11:36:55+08:00/cn/docs/contribution-guidelines/2022-12-30T19:57:48+08:00/cn/docs/changelog/hugegraph-0.4.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/traverser/2023-09-15T11:15:58+08:00/cn/docs/clients/restful-api/rank/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.3.3-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/graphs/2023-09-18T17
:50:28+08:00/cn/docs/changelog/hugegraph-0.2.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/task/2023-09-19T14:14:13+08:00/cn/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/cypher/2023-07-31T23:55:30+08:00/cn/docs/clients/restful-api/auth/2023-07-31T23:55:30+08:00/cn/docs/clients/restful-api/other/2023-07-31T23:55:30+08:00/cn/docs/2022-12-30T19:57:48+08:00/cn/blog/news/2022-04-17T11:36:55+08:00/cn/blog/releases/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/easy-documentation-with-docsy/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/the-second-blog-post/2022-04-17T11:36:55+08:00/cn/blog/2018/01/04/another-great-release/2022-04-17T11:36:55+08:00/cn/docs/cla/2022-04-17T11:36:55+08:00/cn/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T15:16:23+08:00/cn/docs/summary/2023-10-09T17:41:59+08:00/cn/blog/2022-04-17T11:36:55+08:00/cn/categories//cn/community/2022-04-17T11:36:55+08:00/cn/2023-01-04T22:59:07+08:00/cn/search/2022-04-17T11:36:55+08:00/cn/tags/ \ No newline at end of file 
+/cn/docs/guides/architectural/2023-06-25T21:06:07+08:00/cn/docs/config/config-guide/2023-11-01T21:52:52+08:00/cn/docs/language/hugegraph-gremlin/2023-01-01T16:16:43+08:00/cn/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T15:16:23+08:00/cn/docs/quickstart/hugegraph-server/2023-11-01T21:52:52+08:00/cn/docs/introduction/readme/2023-06-18T14:57:33+08:00/cn/docs/changelog/hugegraph-1.0.0-release-notes/2023-01-09T07:41:46+08:00/cn/docs/clients/restful-api/2023-11-01T21:52:52+08:00/cn/docs/clients/restful-api/schema/2023-05-14T19:35:13+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2023-01-01T16:16:43+08:00/cn/docs/contribution-guidelines/contribute/2023-09-09T20:50:32+08:00/cn/docs/config/config-option/2023-09-19T14:14:13+08:00/cn/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/cn/docs/download/download/2023-06-17T14:43:04+08:00/cn/docs/language/hugegraph-example/2023-02-02T01:21:10+08:00/cn/docs/clients/hugegraph-client/2022-09-15T15:16:23+08:00/cn/docs/performance/api-preformance/2023-06-17T14:43:04+08:00/cn/docs/quickstart/hugegraph-loader/2023-10-07T16:52:41+08:00/cn/docs/clients/restful-api/propertykey/2023-05-19T05:15:56-05:00/cn/docs/changelog/hugegraph-0.11.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.12.0-release-notes/2023-01-01T16:16:43+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2023-01-01T16:16:43+08:00/cn/docs/contribution-guidelines/subscribe/2023-06-17T14:43:04+08:00/cn/docs/config/config-authentication/2023-09-19T14:14:13+08:00/cn/docs/clients/gremlin-console/2023-06-12T23:52:07+08:00/cn/docs/guides/custom-plugin/2023-09-19T14:14:13+08:00/cn/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.10.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-hubble/2023-11-01T21:52:52+08:00/cn/docs/co
ntribution-guidelines/validate-release/2023-02-15T16:14:21+08:00/cn/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/cn/docs/config/2022-04-17T11:36:55+08:00/cn/docs/config/config-https/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-client/2023-10-09T17:41:59+08:00/cn/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.9.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/contribution-guidelines/hugegraph-server-idea-setup/2023-06-25T21:06:07+08:00/cn/docs/clients/2022-04-17T11:36:55+08:00/cn/docs/config/config-computer/2023-01-01T16:16:43+08:00/cn/docs/guides/faq/2023-01-04T22:59:07+08:00/cn/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-tools/2023-10-09T17:41:59+08:00/cn/docs/changelog/hugegraph-0.8.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-computer/2023-10-09T17:41:59+08:00/cn/docs/guides/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/rebuild/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.7.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/language/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.6.1-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertex/2023-06-04T23:04:47+08:00/cn/docs/clients/restful-api/edge/2023-06-29T10:17:29+08:00/cn/docs/performance/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.5.6-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/2022-04-17T11:36:55+08:00/cn/docs/contribution-guidelines/2022-12-30T19:57:48+08:00/cn/docs/changelog/hugegraph-0.4.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/traverser/2023-09-15T11:15:58+08:00/cn/docs/clients/restful-api/rank/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.3.3-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/graphs/2023-09-18T17
:50:28+08:00/cn/docs/changelog/hugegraph-0.2.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/task/2023-09-19T14:14:13+08:00/cn/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/cypher/2023-07-31T23:55:30+08:00/cn/docs/clients/restful-api/auth/2023-07-31T23:55:30+08:00/cn/docs/clients/restful-api/other/2023-07-31T23:55:30+08:00/cn/docs/2022-12-30T19:57:48+08:00/cn/blog/news/2022-04-17T11:36:55+08:00/cn/blog/releases/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/easy-documentation-with-docsy/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/the-second-blog-post/2022-04-17T11:36:55+08:00/cn/blog/2018/01/04/another-great-release/2022-04-17T11:36:55+08:00/cn/docs/cla/2022-04-17T11:36:55+08:00/cn/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T15:16:23+08:00/cn/docs/summary/2023-10-09T17:41:59+08:00/cn/blog/2022-04-17T11:36:55+08:00/cn/categories//cn/community/2022-04-17T11:36:55+08:00/cn/2023-01-04T22:59:07+08:00/cn/search/2022-04-17T11:36:55+08:00/cn/tags/ \ No newline at end of file diff --git a/docs/_print/index.html b/docs/_print/index.html index 59cdad962..e78dc238c 100644 --- a/docs/_print/index.html +++ b/docs/_print/index.html @@ -5,14 +5,15 @@ implemented the Apache TinkerPop3 framework and is fully compatible with the Gremlin query language, With complete toolchain components, it helps users easily build applications and products based on graph databases. HugeGraph supports fast import of more than 10 billion vertices and edges, and provides millisecond-level relational query capability (OLTP). It supports large-scale distributed graph computing (OLAP).

    Typical application scenarios of HugeGraph include deep relationship exploration, association analysis, path search, feature extraction, data clustering, community detection, knowledge graphs, etc. It is applicable to business fields such as network security, telecom fraud detection, financial risk control, advertising recommendation, social networks, and intelligent robots.

    Features

    HugeGraph supports graph operations in online and offline environments, supports batch import of data, supports efficient complex relationship analysis, and can be seamlessly integrated with big data platforms.
    HugeGraph supports multi-user parallel operations. Users can enter Gremlin query statements and get graph query results promptly, and can also call the HugeGraph API from their own programs for graph analysis or queries.

    This system has the following features:

    • Ease of use: HugeGraph supports the Gremlin graph query language and RESTful API, provides common interfaces for graph retrieval, and has full-featured peripheral tools, making it easy to implement various graph-based queries and analyses.
    • Efficiency: HugeGraph is deeply optimized for graph storage and graph computing, provides a variety of batch import tools that can quickly import tens of billions of records, and achieves millisecond-level response for graph retrieval through query optimization. It supports simultaneous online real-time operations by thousands of users.
    • Universal: HugeGraph supports the Apache Gremlin standard graph query language and the Property Graph standard graph modeling method, supports both graph-based OLTP and OLAP schemes, and integrates with the Apache Hadoop and Apache Spark big data platforms.
    • Scalable: supports distributed storage, multiple data replicas, and horizontal scaling; ships with multiple built-in backend storage engines, and the backend can easily be extended through plug-ins.
    • Open: HugeGraph's code is open source (Apache 2 License); customers can modify and customize it independently and selectively contribute back to the open-source community.

    The functions of this system include but are not limited to:

    • Supports batch import of data from multiple data sources (including local files, HDFS files, and MySQL databases) and multiple file formats (including TXT, CSV, and JSON)
    • Provides a visual operation interface for operating, analyzing, and displaying graphs, lowering the barrier for users
    • Optimized graph interfaces: shortest path (Shortest Path), K-step connected subgraph (K-neighbor), K-step reachable neighbors (K-out), the personalized recommendation algorithm PersonalRank, etc.
    • Implemented based on the Apache TinkerPop3 framework, supporting the Gremlin graph query language
    • Supports property graphs; properties can be added to vertices and edges, with rich property types
    • Has independent schema metadata, powerful graph modeling capabilities, and easy third-party system integration
    • Supports multiple vertex ID strategies: primary-key IDs, automatic ID generation, user-defined string IDs, and user-defined numeric IDs
    • Properties of edges and vertices can be indexed to support exact queries, range queries, and full-text search
    • The storage system uses a plug-in model, supporting RocksDB, Cassandra, ScyllaDB, HBase, MySQL, PostgreSQL, Palo, InMemory, etc.
    • Integrates with big data systems such as Hadoop and Spark GraphX, and supports bulk-load operations
    • Supports high availability (HA), multiple data replicas, backup and recovery, monitoring, etc.

    Modules

    • HugeGraph-Server: HugeGraph-Server is the core part of the HugeGraph project, including submodules such as Core, Backend, and API;
      • Core: Graph engine implementation, connecting the Backend module downward and supporting the API module upward;
      • Backend: Realize the storage of graph data to the backend. The supported backends include: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL, and PostgreSQL. Users can choose one according to the actual situation;
      • API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin query.
    • HugeGraph-Client: HugeGraph-Client provides a RESTful API client for connecting to HugeGraph-Server. Currently only a Java version is implemented; users of other languages can implement one themselves;
    • HugeGraph-Loader: HugeGraph-Loader is a data import tool based on HugeGraph-Client, which converts ordinary text data into graph vertices and edges and inserts them into the graph database;
    • HugeGraph-Computer: HugeGraph-Computer is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of Pregel. It runs on the Kubernetes framework;
    • HugeGraph-Hubble: HugeGraph-Hubble is HugeGraph’s web visualization management platform, a one-stop visual analysis platform. The platform covers the whole process from data modeling, to rapid data import, to online and offline analysis of data, and unified management of graphs;
    • HugeGraph-Tools: HugeGraph-Tools is HugeGraph’s deployment and management tools, including functions such as managing graphs, backup/restore, Gremlin execution, etc.

    Contact Us

    QR png

    2 - Download HugeGraph

    Latest version

    The latest HugeGraph: 1.0.0, released on 2023-02-22 (how to build from source).

    components            description                                                                          download
    HugeGraph-Server      The main program of HugeGraph                                                        1.0.0 (alternate)
    HugeGraph-Toolchain   A collection of tools for graph data import/export/backup, web visualization, etc.   1.0.0 (alternate)

    Binary Versions mapping

    Version   Release Date   server                     toolchain                  computer                   Release Notes
    1.0.0     2023-02-22     [Binary] [Sign] [SHA512]   [Binary] [Sign] [SHA512]   [Binary] [Sign] [SHA512]   Release-Notes

    Source Versions mapping

    Version   Release Date   server                     toolchain                  computer                   common                     Release Notes
    1.0.0     2023-02-22     [Source] [Sign] [SHA512]   [Source] [Sign] [SHA512]   [Source] [Sign] [SHA512]   [Source] [Sign] [SHA512]   Release-Notes

    Outdated Versions Mapping

    server   client   loader   hubble   common   tools
    0.12.0   2.0.1    0.12.0   1.6.0    2.0.1    1.6.0
    0.11.2   1.9.1    0.11.1   1.5.0    1.8.1    1.5.0
    0.10.4   1.8.0    0.10.1   0.10.0   1.6.16   1.4.0
    0.9.2    1.7.0    0.9.0    0.9.0    1.6.0    1.3.0
    0.8.0    1.6.4    0.8.0    0.8.0    1.5.3    1.2.0
    0.7.4    1.5.8    0.7.0    0.7.0    1.4.9    1.1.0
    0.6.1    1.5.6    0.6.1    0.6.1    1.4.3    1.0.0
    0.5.6    1.5.0    0.5.6    0.5.0    1.4.0
    0.4.5    1.4.7    0.2.2    0.4.1    1.3.12

    Note: The latest graph analysis and display platform is Hubble, which supports server v0.10 and later.

    3 - Quick Start

    3.1 - HugeGraph-Server Quick Start

    1 HugeGraph-Server Overview

    HugeGraph-Server is the core part of the HugeGraph project; it contains submodules such as Core, Backend, and API.

    The Core module is an implementation of the TinkerPop interface; the Backend module persists graph data to the data store (currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB); the API module provides an HTTP server that converts a client's HTTP requests into calls to the Core module.

    The document uses two spellings, HugeGraph-Server and HugeGraphServer (similarly for other modules). There is little difference in meaning; they can be distinguished as follows: HugeGraph-Server refers to the code of the server-related components, while HugeGraphServer refers to the running service process.

    2 Dependency for Building/Running

    2.1 Install Java 11 (JDK 11)

    Consider using Java 11 to run HugeGraph-Server (it is also still compatible with Java 8), and install and configure it yourself.

    Be sure to execute the java -version command to check the JDK version before proceeding.

    3 Deploy

    There are four ways to deploy HugeGraph-Server components:

    • Method 1: Use Docker container (recommended)
    • Method 2: Download the binary tarball
    • Method 3: Source code compilation
    • Method 4: One-click deployment

    You can refer to the Docker deployment guide.

    We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start a HugeGraph server with the RocksDB backend in the background.

    Optional:

    1. Use docker exec -it graph bash to enter the container and perform operations.
    2. Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. You can use the RESTful API to verify the result; the detailed steps are described in 5.1.1.

    Also, if you want to manage other HugeGraph-related instances in one file, you can deploy with docker-compose using the command docker-compose up -d (you can configure only the server). Here is an example docker-compose.yml:

    version: '3'
     services:
       graph:
         image: hugegraph/hugegraph
         # environment:
         #  - PRELOAD=true
         # PRELOAD is an option to preload a built-in sample graph when initializing.
         ports:
           - 8080:8080
     

    3.2 Download the binary tarball

    You can download the binary tarball from the download page of the ASF site like this:

    # use the latest version, here is 1.0.0 for example
     wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
     tar zxf *hugegraph*.tar.gz
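Apache release tarballs are published together with a detached .sha512 checksum file (the [SHA512] links in the tables above), and it is good practice to verify the download before unpacking. A minimal Python sketch; the file names below follow the wget command above, and the checksum file is assumed to hold a hex digest as its first token (the sha512sum format):

```python
import hashlib

def sha512_of(path: str) -> str:
    """Hex SHA-512 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_tarball(tarball: str, checksum_file: str) -> bool:
    """Compare the tarball's digest with the first token of the .sha512 file."""
    with open(checksum_file) as f:
        expected = f.read().split()[0].strip().lower()
    return sha512_of(tarball) == expected
```

For example, verify_tarball("apache-hugegraph-incubating-1.0.0.tar.gz", "apache-hugegraph-incubating-1.0.0.tar.gz.sha512") should return True for an intact download.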
     cd *hugegraph*/*tool* 
     

    Note: ${version} is the version number; see the Download page for the latest version, or click the link there to download directly.

    The general entry script for HugeGraph-Tools is bin/hugegraph. Users can run the help command to view its usage; only the commands for one-click deployment are introduced here.

    bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
     

    {hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to deploy (users can view the conf/version-mapping.yaml file for version information). {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs (the default download URL is used if not provided). For example, to start HugeGraphServer and HugeGraphStudio version 0.6, write the command as bin/hugegraph deploy -v 0.6 -p services.

    4 Config

    If you need to quickly start HugeGraph just for testing, you only need to modify a few configuration items (see the next section).
      For a detailed introduction to configuration, please refer to the configuration document and the introduction to configuration items.

      5 Startup

      5.1 Use Docker to startup

      In 3.1 Use Docker container, we introduced how to deploy hugegraph-server with Docker. The server can also preload an example graph by setting a parameter.

      5.1.1 Create example graph when starting server

      Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.

      1. Use docker run

        Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

      2. Use docker-compose

        Create docker-compose.yml as follows, setting the environment variable PRELOAD=true. example.groovy is a predefined script that preloads the sample data; if needed, you can mount a new example.groovy to change the preloaded data.

        version: '3'
           services:
             graph:
               image: hugegraph/hugegraph:latest
              environment:
                - PRELOAD=true
              ports:
                - 8080:8080
         

        Use docker-compose up -d to start the container

      Then use the RESTful API to request HugeGraphServer and get the following result:

      > curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
       
       {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
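The response can also be checked programmatically with any JSON library. A short Python sketch that counts the preloaded vertices per label, using the exact response body shown above:

```python
import json
from collections import Counter

# The response body returned by GET /graphs/hugegraph/graph/vertices (as shown above).
response = '''{"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}'''

vertices = json.loads(response)["vertices"]
labels = Counter(v["label"] for v in vertices)
print(labels)  # the example graph has 4 "person" and 2 "software" vertices
```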
       serializer=binary
       rocksdb.data_path=.
       rocksdb.wal_path=.

      Initialize the database (required on first startup, or whenever a new configuration is manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3

      Initialize the database (required on first startup, or whenever a new configuration is manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       Initing HugeGraph Store...
       2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3

      Since ScyllaDB itself is an “optimized version” of Cassandra, users who do not have ScyllaDB installed can also use Cassandra directly as the backend storage: simply keep the backend and serializer set to scylladb, and point the host and port to the seeds and port of the Cassandra cluster. However, this is not recommended, as it gains none of ScyllaDB's own advantages.

      Initialize the database (required on first startup, or whenever a new configuration is manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       #hbase.enable_partition=true
       #hbase.vertex_partitions=10
       #hbase.edge_partitions=30

      Initialize the database (required on first startup, or whenever a new configuration is manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       jdbc.reconnect_max_times=3
       jdbc.reconnect_interval=3
       jdbc.ssl_mode=false

      Initialize the database (required on first startup, or whenever a new configuration is manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
               ...
           ]
       }

      For detailed API, please refer to RESTful-API

      You can also visit localhost:8080/swagger-ui/index.html to check the API.

      image

      7 Stop Server

      $cd *hugegraph-${version}
       $bin/stop-hugegraph.sh
       

      8 Debug Server with IntelliJ IDEA

      Please refer to Setup Server in IDEA

      3.2 - HugeGraph-Loader Quick Start

      1 HugeGraph-Loader Overview

      HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.

      Currently supported data sources include:

      • Local disk file or directory, supports TEXT, CSV and JSON format files, supports compressed files
      • HDFS file or directory, supports compressed files
      • Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server

      Local disk files and HDFS files support resumable uploads.

      It will be explained in detail below.

      Note: HugeGraph-Loader requires the HugeGraph-Server service; please refer to HugeGraph-Server Quick Start to download and start the server.

      2 Get HugeGraph-Loader

      There are two ways to get HugeGraph-Loader:

      • Download the compiled tarball
      • Clone source code then compile and install

      2.1 Download the compiled archive

      Download the latest version of the HugeGraph-Toolchain release package:

      wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
       tar zxf *hugegraph*.tar.gz
       --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
       --username admin --token admin --host xx.xx.xx.xx --port 8093 \
       --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g

      3.3 - HugeGraph-Hubble Quick Start

      1 HugeGraph-Hubble Overview

      HugeGraph is an analysis-oriented graph database system that supports batch operations. It fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, provides a complete toolchain ecosystem for export, backup, and recovery, and effectively solves the storage, query, and correlation-analysis needs of massive graph data. HugeGraph is widely used in fields such as risk control, insurance claims, recommendation and search, public-security crime crackdown, knowledge graphs, network security, and IT operation and maintenance in banking and securities companies, and is committed to letting more industries, organizations, and users enjoy the comprehensive value of data.

HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. It covers the whole process from data modeling, through efficient data import, to real-time and offline analysis, together with unified graph management, providing a wizard for the entire graph-application workflow. It is designed to improve usability, lower the barrier to entry, and provide a more efficient and easy-to-use user experience.

      The platform mainly includes the following modules:

      Graph Management

The graph management module provides unified management of multiple graphs: by creating a graph and connecting the platform to its data, graphs can be accessed, edited, deleted, and queried.

      Metadata Modeling

The metadata modeling module builds and manages graph models through the creation of properties, vertex types, edge types, and index types. The platform offers two modes, a list mode and a more intuitive graph mode that displays the metadata model in real time. It also provides cross-graph metadata reuse, which saves the tedious, repetitive creation of identical metadata and greatly improves modeling efficiency and ease of use.

      Data Import

Data import converts the user’s business data into the vertices and edges of the graph and inserts them into the graph database. The platform provides a wizard-style visual import module: by creating import tasks, it manages them and runs multiple import tasks in parallel to improve import performance. After entering an import task, you only need to follow the platform’s step-by-step prompts, upload files as needed, and fill in the content to complete the import of graph data. Resumable uploads and an error-retry mechanism further reduce import cost and improve efficiency.

      Graph Analysis

By entering statements in the graph traversal language Gremlin, high-performance general analysis of graph data can be performed, including customized multidimensional path queries starting from vertices. Three result display modes are provided, graph form, table form, and Json form, whose multidimensional presentation meets the needs of various usage scenarios. Execution records and a collection of common statements make graph operations traceable and allow query input to be reused and shared quickly and efficiently. Graph data can also be exported, in Json format.

      Task Management

For time-consuming asynchronous tasks, such as Gremlin tasks that traverse the whole graph and index creation or rebuilding, the platform provides task management functions to achieve unified management and result viewing of asynchronous tasks.

      2 Deploy

There are three ways to deploy hugegraph-hubble:

      • Use Docker (recommended)
      • Download the Toolchain binary package
      • Source code compilation

Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it to localhost/127.0.0.1: inside the hubble container this refers to the container itself rather than the host machine, resulting in a connection failure to the server.

If hubble and the server are in the same Docker network, we recommend using the container_name (in our example, graph) as the hostname and 8080 as the port. Alternatively, you can use the host’s IP as the hostname, with the port being the one the host maps to the server.

We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

      Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.

Use docker-compose up -d. The docker-compose.yml is as follows:

version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080

  hubble:
    image: hugegraph/hubble
       mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
       cd apache-hugegraph-hubble-incubating*
       

Run hubble:

      bin/start-hubble.sh -d

      3 Platform Workflow

      The module usage process of the platform is as follows:

      image

      4 Platform Instructions

      4.1 Graph Management

      4.1.1 Graph creation

Under the graph management module, click [Create graph] and connect to multiple graphs by filling in the graph ID, graph name, hostname, port, username, and password.

      image

      Create graph by filling in the content as follows:

      image

Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it to localhost/127.0.0.1. If hubble and the server are in the same Docker network, we recommend using the container_name (in our example, graph) as the hostname and 8080 as the port. Alternatively, you can use the host’s IP as the hostname, with the port being the one the host maps to the server.
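The note above can be sketched as a Docker Compose file in which both containers share the default Compose network, so hubble reaches the server by its container name. The container names and hubble’s 8088 port are assumptions taken from the docker run example earlier; adjust them to your deployment.

```yaml
version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph        # enter "graph" as the hostname in Hubble's web page
    ports:
      - 8080:8080               # use 8080 as the port in Hubble's web page

  hubble:
    image: hugegraph/hubble
    container_name: hubble
    ports:
      - 8088:8088               # assumed hubble web port, per the docker run example
```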

      4.1.2 Graph Access

Provides access to the graph space. After entering a graph, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on it.

      image
      4.1.3 Graph management
1. Users can achieve unified management of graphs through the overview, search, and the editing and deletion of individual graphs.
2. Search range: you can search by graph name and ID.
      image

      4.2 Metadata Modeling (list + graph mode)

      4.2.1 Module entry

      Left navigation:

      image
      4.2.2 Property type
      4.2.2.1 Create type
      1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
      2. Created attributes can be used as attributes of vertex type and edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.2.2 Reuse
      1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
      2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.

      Select reuse items:

      image

      Check reuse items:

      image
      4.2.2.3 Management
1. Items can be deleted singly or in batches in the attribute list.
      4.2.3 Vertex type
      4.2.3.1 Create type
1. Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete creation of the vertex type.

      List mode:

      image

      Graph mode:

      image
      4.2.3.2 Reuse
1. Reusing a vertex type also reuses the attributes and attribute indexes associated with that type.
2. The reuse method is similar to property reuse; see 4.2.2.2.
      4.2.3.3 Administration
1. Editing operations are available: the vertex style, associated type, vertex display content, and attribute indexes can be edited; the rest cannot be edited.

      2. You can delete a single item or delete it in batches.

      image
      4.2.4 Edge Types
      4.2.4.1 Create
1. Fill in or select the edge type name, start vertex type, end vertex type, associated attributes, whether multiple connections are allowed, edge style, the content displayed below the edge in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete creation of the edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.4.2 Reuse
1. Reusing an edge type also reuses its start vertex type, end vertex type, associated attributes, and attribute indexes.
2. The reuse method is similar to property reuse; see 4.2.2.2.
      4.2.4.3 Administration
1. Editing operations are available: edge styles, associated attributes, edge display content, and attribute indexes can be edited; the rest cannot be edited, the same as for vertex types.
      2. You can delete a single item or delete it in batches.
      4.2.5 Index Types

      Displays vertex and edge indices for vertex types and edge types.

      4.3 Data Import

Note: currently we recommend using hugegraph-loader for formal data import; hubble’s built-in import is intended for testing and getting started.

      The usage process of data import is as follows:

      image
      4.3.1 Module entrance

      Left navigation:

      image
      4.3.2 Create task
      1. Fill in the task name and remarks (optional) to create an import task.
      2. Multiple import tasks can be created and imported in parallel.
      image
      4.3.3 Uploading files
1. Upload the files used to build the graph. The currently supported format is CSV; more formats will be supported in the future.
      2. Multiple files can be uploaded at the same time.
      image
      4.3.4 Setting up data mapping
1. Set up data mapping for uploaded files, including file settings and type settings.

2. File settings: check or fill in whether the file contains a header, plus its separator, encoding format, and other settings; defaults are provided for all of them, so manual input is usually unnecessary.

      3. Type setting:

        1. Vertex map and edge map:

          【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;

          【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;

  2. Mapping settings: map the column data in the file to the attributes of the selected vertex or edge type. If an attribute name is the same as a header name in the file, the mapping attribute is matched automatically and does not need to be filled in manually.

        3. After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.

      Fill in the settings map:

      image

      Mapping list:

      image
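The auto-matching described above, pairing file headers with same-named attributes, can be sketched as follows. This is an illustrative example, not Hubble’s internal code; the class, method, and column names are hypothetical.

```java
import java.util.*;

// Sketch: map one CSV row to a vertex's properties by matching header names
// against the vertex type's attribute names, as the mapping step describes.
public class VertexMappingSketch {
    public static Map<String, String> mapRow(String[] header, String[] row,
                                             Set<String> vertexProps) {
        Map<String, String> props = new LinkedHashMap<>();
        for (int i = 0; i < header.length && i < row.length; i++) {
            if (vertexProps.contains(header[i])) {  // auto-match by name
                props.put(header[i], row[i]);
            }
        }
        return props;
    }

    public static void main(String[] args) {
        String[] header = {"name", "age", "city"};
        String[] row = {"marko", "29", "Beijing"};
        // Only columns whose header matches a declared attribute are mapped.
        Map<String, String> props =
                mapRow(header, row, new HashSet<>(Arrays.asList("name", "age")));
        System.out.println(props);  // {name=marko, age=29}
    }
}
```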
      4.3.5 Import data

Before importing, you need to fill in the import setting parameters. After that, you can start importing data into the graph.

1. Import settings
• The import setting parameters are as shown in the figure below; defaults are provided for all of them, so manual input is unnecessary
      image
2. Import details
• Click Start Import to start the file import task
• The import details show, for each uploaded file, the mapping type, import speed, import progress, time consumed, and the specific status of the current task, and each task can be paused, resumed, stopped, etc.
• If the import fails, you can view the specific reason
      image

      4.4 Data Analysis

      4.4.1 Module entry

      Left navigation:

      image
4.4.2 Multi-graph switching

Switch flexibly between the workspaces of multiple graphs via the entry on the left

      image
      4.4.3 Graph Analysis and Processing

HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3 and a general-purpose graph database query language. By entering Gremlin statements and clicking execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.

      After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].

      Support zoom, center, full screen, export and other operations.

【Graph Mode】

      image

      【Table mode】

      image

      【Json mode】

      image
      4.4.4 Data Details

Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the information dimensions of the graph and improving usability.

      4.4.5 Multidimensional Path Query of Graph Results

In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in the query result, realizing customized mining of graph results.

Right-click a vertex to open its menu, from which the vertex can be expanded, queried, hidden, etc.

      • Expand: Click to display the vertices associated with the selected point.
      • Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
      • Hide: When clicked, hides the selected point and its associated edges.

      Double-clicking a vertex also displays the vertex associated with the selected point.

      image
      4.4.6 Add vertex/edge
      4.4.6.1 Added vertex

      In the graph area, two entries can be used to dynamically add vertices, as follows:

      1. Click on the graph area panel, the Add Vertex entry appears
      2. Click the first icon in the action bar in the upper right corner

      Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.

      The entry is as follows:

      image

      Add the vertex content as follows:

      image
      4.4.6.2 Add edge

      Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.

      4.4.7 Execute the query of records and favorites
1. Each query is recorded at the bottom of the graph area, including query time, execution type, content, status, and time consumed, together with [favorite] and [load] operations, giving a complete, traceable record of graph execution and allowing executed content to be quickly loaded and reused
2. Provides a statement-favorites function for collecting frequently used statements, making high-frequency statements convenient to call
      image

      4.5 Task Management

      4.5.1 Module entry

      Left navigation:

      image
      4.5.2 Task Management
      1. Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
      • gremlin: Gremlin tasks
      • algorithm: OLAP algorithm task
      • remove_schema: remove metadata
      • rebuild_index: rebuild the index
2. The list displays the asynchronous task information of the current graph, including task ID, task name, task type, creation time, time consumed, status, and operations, realizing the management of asynchronous tasks.
3. Supports filtering by task type and status
4. Supports searching by task ID and task name
5. Asynchronous tasks can be deleted singly or in batches
      image
      4.5.3 Gremlin asynchronous tasks
1. Create a task
• The data analysis module currently supports two Gremlin operations: Gremlin query and Gremlin task. If the user switches to Gremlin task, clicking execute creates an asynchronous task in the asynchronous task center
2. Task submission
• After the task is submitted successfully, the graph area returns the submission result and task ID
3. Task details
• A [View] entry is provided to jump to the task details and view the specific execution of the current task. After jumping to the task center, the row of the currently executing task is shown directly
      image

      Click to view the entry to jump to the task management list, as follows:

      image
4. View the results
• The results are displayed in Json form
      4.5.4 OLAP algorithm tasks

      There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
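As an example of addressing such a task over REST, the sketch below only builds (and does not send) the request used to check a task’s status by ID, following HugeGraph’s /graphs/{graph}/tasks/{id} path; the host, graph name, and task ID are placeholder assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Sketch: construct the REST request for querying an asynchronous task by ID.
public class TaskQuerySketch {
    public static HttpRequest taskRequest(String host, String graph, long taskId) {
        return HttpRequest.newBuilder()
                .uri(URI.create(String.format("http://%s/graphs/%s/tasks/%d",
                                              host, graph, taskId)))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        // Placeholder host/graph/ID; send with java.net.http.HttpClient if desired.
        HttpRequest req = taskRequest("127.0.0.1:8080", "hugegraph", 2);
        System.out.println(req.uri());  // http://127.0.0.1:8080/graphs/hugegraph/tasks/2
    }
}
```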

      4.5.5 Delete metadata, rebuild index
1. Create a task
• In the metadata modeling module, deleting metadata can create an asynchronous metadata-deletion task
image
• When editing an existing vertex/edge type and adding an index, an asynchronous index-creation task can be created
image
2. Task details
• After confirming/saving, you can jump to the task center to view the details of the current task
      image

      3.4 - HugeGraph-Client Quick Start

1 Overview Of HugeGraph

HugeGraph-Client sends HTTP requests to HugeGraph-Server and parses the server’s execution results. Currently only a Java version of HugeGraph-Client is provided. You can use HugeGraph-Client to write Java code that operates HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing Gremlin statements.

      2 What You Need

• Java 11 (Java 8 is also supported)
      • Maven 3.5+

      3 How To Use

      The basic steps to use HugeGraph-Client are as follows:

• Build a new Maven project with IDEA or Eclipse
• Add the HugeGraph-Client dependency to the pom file
• Create an object and invoke the interfaces of HugeGraph-Client

See the complete example in the following section for details.

      4 Complete Example

      4.1 Build New Maven Project

      Using IDEA or Eclipse to create the project:

      4.2 Add Hugegraph-Client Dependency In POM

      <dependencies>
           <dependency>
               <groupId>org.apache.hugegraph</groupId>
               <artifactId>hugegraph-client</artifactId>

$ ./bin/start-hugegraph.sh

Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
       Started [pid 21614]
       

      Check out created graphs:

      curl http://127.0.0.1:8080/graphs/
       
       Country Code: CN
       
      1. Export the server certificate based on the server’s private key.
      keytool -export -alias serverkey -keystore server.keystore -file server.crt
       

      server.crt is the server’s certificate.

      Client

      keytool -import -alias serverkey -file server.crt -keystore client.truststore

      client.truststore is for the client’s use and contains the trusted certificate.

      4.5 - HugeGraph-Computer Config

      Computer Config Options

| config option | default value | description |
|---|---|---|
| algorithm.message_class | org.apache.hugegraph.computer.core.config.Null | The class of message passed when computing a vertex. |
| algorithm.params_class | org.apache.hugegraph.computer.core.config.Null | The class used to transfer the algorithm's parameters before the algorithm is run. |
| algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of the vertex's value; the instance is used to store the computation result for the vertex. |
| allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator. |
| bsp.etcd_endpoints | http://localhost:2379 | The endpoints to access etcd. |
| bsp.log_interval | 30000 | The log interval (in ms) to print the log while waiting for a bsp event. |
| bsp.max_super_step | 10 | The max superstep of the algorithm. |
| bsp.register_timeout | 300000 | The max timeout to wait for the master and workers to register. |
| bsp.wait_master_timeout | 86400000 | The max timeout (in ms) to wait for the master's bsp event. |
| bsp.wait_workers_timeout | 86400000 | The max timeout to wait for the workers' bsp event. |
| hgkv.max_data_block_size | 65536 | The max byte size of an hgkv-file data block. |
| hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file. |
| hgkv.max_merge_files | 10 | The max number of files to merge at one time. |
| hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files; temporary files will be generated during the file-merging process. |
| hugegraph.name | hugegraph | The graph name to load data from and write results back to. |
| hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data from and write results back to. |
| input.edge_direction | OUT | The direction of edges to load; when the value is BOTH, edges in both OUT and IN directions will be loaded. |
| input.edge_freq | MULTIPLE | How many edges can exist between a pair of vertices; allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means only one edge can exist between a pair of vertices, identified by sourceId + targetId; SINGLE_PER_LABEL means one edge per edge label can exist between a pair of vertices, identified by sourceId + edgeLabel + targetId; MULTIPLE means many edges can exist between a pair of vertices, identified by sourceId + edgeLabel + sortValues + targetId. |
| input.filter_class | org.apache.hugegraph.computer.core.input.filter.DefaultInputFilter | The class to create the input-filter object; the input-filter is used to filter vertex edges according to user needs. |
| input.loader_schema_path | | The schema path of loader input; only takes effect when input.source_type=loader is enabled. |
| input.loader_struct_path | | The struct path of loader input; only takes effect when input.source_type=loader is enabled. |
| input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex; the adjacent edges will be stored and transferred together as a batch unit. |
| input.source_type | hugegraph-server | The source type to load input data from; allowed values: ['hugegraph-server', 'hugegraph-loader']. 'hugegraph-loader' means using hugegraph-loader to load data from HDFS or a file; in that case, please configure 'input.loader_struct_path' and 'input.loader_schema_path'. |
| input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits. |
| input.split_max_splits | 10000000 | The maximum number of input splits. |
| input.split_page_size | 500 | The page size for streamed load of input split data. |
| input.split_size | 1048576 | The input split size in bytes. |
| job.id | local_0001 | The job id on the Yarn or K8s cluster. |
| job.partitions_count | 1 | The partition count for computing one graph algorithm job. |
| job.partitions_thread_nums | 4 | The number of threads for parallel partition compute. |
| job.workers_count | 1 | The worker count for computing one graph algorithm job. |
| master.computation_class | org.apache.hugegraph.computer.core.master.DefaultMasterComputation | Master-computation is the computation that can determine whether to continue to the next superstep. It runs at the end of each superstep on the master. |
| output.batch_size | 500 | The batch size of output. |
| output.batch_threads | 1 | The number of threads used for batch output. |
| output.hdfs_core_site_path | | The hdfs core-site path. |
| output.hdfs_delimiter | , | The delimiter of hdfs output. |
| output.hdfs_kerberos_enable | false | Whether Kerberos authentication is enabled for HDFS. |
| output.hdfs_kerberos_keytab | | The HDFS keytab file for Kerberos authentication. |
| output.hdfs_kerberos_principal | | The HDFS principal for Kerberos authentication. |
| output.hdfs_krb5_conf | /etc/krb5.conf | Kerberos configuration file. |
| output.hdfs_merge_partitions | true | Whether to merge the output files of multiple partitions. |
| output.hdfs_path_prefix | /hugegraph-computer/results | The directory of the hdfs output result. |
| output.hdfs_replication | 3 | The replication number of hdfs. |
| output.hdfs_site_path | | The hdfs site path. |
| output.hdfs_url | hdfs://127.0.0.1:9000 | The hdfs url of output. |
| output.hdfs_user | hadoop | The hdfs user of output. |
| output.output_class | org.apache.hugegraph.computer.core.output.LogOutput | The class to output the computation result of each vertex; called after the iteration computation. |
| output.result_name | value | The value is assigned dynamically by #name() of the instance created by WORKER_COMPUTATION_CLASS. |
| output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph; allowed values: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE]. |
| output.retry_interval | 10 | The retry interval when output fails. |
| output.retry_times | 3 | The retry times when output fails. |
| output.single_threads | 1 | The number of threads used for single output. |
| output.thread_pool_shutdown_timeout | 60 | The timeout in seconds of output thread pool shutdown. |
| output.with_adjacent_edges | false | Whether to output the adjacent edges of the vertex. |
| output.with_edge_properties | false | Whether to output the properties of the edge. |
| output.with_vertex_properties | false | Whether to output the properties of the vertex. |
| sort.thread_nums | 4 | The number of threads performing internal sorting. |
| transport.client_connect_timeout | 3000 | The timeout (in ms) for the client to connect to the server. |
| transport.client_threads | 4 | The number of transport threads for the client. |
| transport.close_timeout | 10000 | The timeout (in ms) to close the server or client. |
| transport.finish_session_timeout | 0 | The timeout (in ms) to finish a session; 0 means using (transport.sync_request_timeout * transport.max_pending_requests). |
| transport.heartbeat_interval | 20000 | The minimum interval (in ms) between heartbeats on the client side. |
| transport.io_mode | AUTO | The network IO mode, one of 'NIO', 'EPOLL', 'AUTO'; 'AUTO' means selecting the proper mode automatically. |
| transport.max_pending_requests | 8 | The max number of unacknowledged client requests; sending becomes unavailable if the number of unacknowledged requests >= max_pending_requests. |
| transport.max_syn_backlog | 511 | The capacity of the SYN queue on the server side; 0 means using the system default value. |
| transport.max_timeout_heartbeat_count | 120 | The maximum number of heartbeat timeouts on the client side; if the number of consecutive timeouts waiting for a heartbeat response exceeds this value, the channel will be closed from the client side. |
| transport.min_ack_interval | 200 | The minimum interval (in ms) of server ack replies. |
| transport.min_pending_requests | 6 | The minimum number of unacknowledged client requests; sending becomes available again if the number of unacknowledged requests < min_pending_requests. |
| transport.network_retries | 3 | The number of retry attempts for network communication if the network is unstable. |
| transport.provider_class | org.apache.hugegraph.computer.core.network.netty.NettyTransportProvider | The transport provider; currently only Netty is supported. |
| transport.receive_buffer_size | 0 | The size of the socket receive buffer in bytes; 0 means using the system default value. |
| transport.recv_file_mode | true | Whether to enable receive buffer-file mode; if enabled, buffers received from the socket are written to file by zero-copy. |
| transport.send_buffer_size | 0 | The size of the socket send buffer in bytes; 0 means using the system default value. |
| transport.server_host | 127.0.0.1 | The server hostname or IP to listen on for data transfer. |
| transport.server_idle_timeout | 360000 | The max timeout (in ms) of server idle. |
| transport.server_port | 0 | The server port to listen on for data transfer. The system will assign a random port if it's set to 0. |
| transport.server_threads | 4 | The number of transport threads for the server. |
| transport.sync_request_timeout | 10000 | The timeout (in ms) to wait for a response after sending a sync request. |
| transport.tcp_keep_alive | true | Whether to enable TCP keep-alive. |
| transport.transport_epoll_lt | false | Whether to enable EPOLL level-trigger. |
| transport.write_buffer_high_mark | 67108864 | The high water mark for the write buffer in bytes; sending becomes unavailable if the number of queued bytes > write_buffer_high_mark. |
| transport.write_buffer_low_mark | 33554432 | The low water mark for the write buffer in bytes; sending becomes available again if the number of queued bytes < write_buffer_low_mark. |
| transport.write_socket_timeout | 3000 | The timeout (in ms) to write data to the socket buffer. |
| valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of a value-file. |
| worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | A combiner can combine messages into one value for a vertex; for example, the PageRank algorithm can combine a vertex's messages into a sum value. |
| worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create the worker-computation object; the worker-computation is used to compute each vertex in each superstep. |
| worker.data_dirs | [jobs] | The directories, separated by ',', that received vertices and messages can be persisted into. |
| worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner that can combine several properties of the same edge into one set of properties at the input step. |
| worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in. |
| worker.received_buffers_bytes_limit | 104857600 | The byte limit of buffers of received data; the total size of all buffers can't exceed this limit. If received buffers reach this limit, they will be merged into a file. |
| worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner that can combine several properties of the same vertex into one set of properties at the input step. |
| worker.wait_finish_messages_timeout | 86400000 | The max timeout (in ms) the message-handler waits for the finish-message of all workers. |
| worker.wait_sort_timeout | 600000 | The max timeout (in ms) the message-handler waits for the sort thread to sort one batch of buffers. |
| worker.write_buffer_capacity | 52428800 | The initial size of the write buffer used to store vertices or messages. |
| worker.write_buffer_threshold | 52428800 | The threshold of the write buffer; exceeding it triggers sorting. The write buffer is used to store vertices or messages. |
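The input.edge_freq semantics above can be sketched with a small, hypothetical helper (not HugeGraph code): it builds the identity key by which an edge is deduplicated under each setting.

```python
# Illustrative sketch only -- not part of HugeGraph. It mirrors the
# input.edge_freq rules in the table above: which fields identify an edge.
def edge_identity(source_id, edge_label, sort_values, target_id,
                  edge_freq="MULTIPLE"):
    """Return the tuple that uniquely identifies an edge under edge_freq."""
    if edge_freq == "SINGLE":
        # Only one edge may exist between a pair of vertices.
        return (source_id, target_id)
    if edge_freq == "SINGLE_PER_LABEL":
        # One edge per edge label between a pair of vertices.
        return (source_id, edge_label, target_id)
    if edge_freq == "MULTIPLE":
        # Many edges may exist, distinguished by sortValues.
        return (source_id, edge_label, sort_values, target_id)
    raise ValueError("allowed values: SINGLE, SINGLE_PER_LABEL, MULTIPLE")
```

For example, under SINGLE a 'knows' edge and a 'follows' edge between the same pair of vertices collapse to the same identity, while under SINGLE_PER_LABEL they remain distinct.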

      K8s Operator Config Options

NOTE: these options are supplied through environment variable settings; the option name is converted accordingly, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL

| config option | default value | description |
|---|---|---|
| k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed. |
| k8s.close_reconciler_timeout | 120 | The max timeout (in ms) to close the reconciler. |
| k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for the operator system. |
| k8s.max_reconcile_retry | 3 | The max retry times of reconcile. |
| k8s.probe_backlog | 50 | The maximum backlog for serving health probes. |
| k8s.probe_port | 9892 | The port that the controller binds to for serving health probes. |
| k8s.ready_check_internal | 1000 | The time interval (in ms) of readiness checks. |
| k8s.ready_timeout | 30000 | The max timeout (in ms) of readiness checks. |
| k8s.reconciler_count | 10 | The max number of reconciler threads. |
| k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled. |
| k8s.timezone | Asia/Shanghai | The timezone of the computer job and operator. |
| k8s.watch_namespace | hugegraph-computer-system | Watch custom resources in this namespace and ignore other namespaces; '*' means all namespaces will be watched. |
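Following the conversion rule above, the operator's container could receive these options as environment variables. A hedged sketch (the env names follow the stated k8s.xxx => XXX mapping; the values are illustrative, not recommendations):

```yaml
# Sketch: operator options supplied as environment variables
# (names derived via the k8s.xxx => XXX rule noted above).
env:
  - name: INTERNAL_ETCD_URL        # k8s.internal_etcd_url
    value: "http://127.0.0.1:2379"
  - name: WATCH_NAMESPACE          # k8s.watch_namespace
    value: "hugegraph-computer-system"
  - name: AUTO_DESTROY_POD         # k8s.auto_destroy_pod
    value: "true"
```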

      HugeGraph-Computer CRD

      CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

| spec | default value | description | required |
|---|---|---|---|
| algorithmName | | The name of the algorithm. | true |
| jobId | | The job id. | true |
| image | | The image of the algorithm. | true |
| computerConf | | The map of computer config options. | true |
| workerInstances | | The number of worker instances; it overrides the 'job.workers_count' option. | true |
| pullPolicy | Always | The pull policy of the image; for details see https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false |
| pullSecrets | | The pull secrets of the image; for details see https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false |
| masterCpu | | The CPU limit of the master; the unit can be 'm' or omitted; for details see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| workerCpu | | The CPU limit of the worker; the unit can be 'm' or omitted; for details see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false |
| masterMemory | | The memory limit of the master; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| workerMemory | | The memory limit of the worker; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false |
| log4jXml | | The content of log4j.xml for the computer job. | false |
| jarFile | | The jar path of the computer algorithm. | false |
| remoteJarUri | | The remote jar URI of the computer algorithm; it overlays the algorithm image. | false |
| jvmOptions | | The Java startup parameters of the computer job. | false |
| envVars | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false |
| envFrom | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false |
| masterCommand | bin/start-computer.sh | The run command of the master, equivalent to the 'Entrypoint' field of Docker. | false |
| masterArgs | ["-r master", "-d k8s"] | The run args of the master, equivalent to the 'Cmd' field of Docker. | false |
| workerCommand | bin/start-computer.sh | The run command of the worker, equivalent to the 'Entrypoint' field of Docker. | false |
| workerArgs | ["-r worker", "-d k8s"] | The run args of the worker, equivalent to the 'Cmd' field of Docker. | false |
| volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false |
| secretPaths | | The map of k8s-secret names and mount paths. | false |
| configMapPaths | | The map of k8s-configmap names and mount paths. | false |
| podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false |
| securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false |
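As a hedged illustration, a custom resource using only the required spec fields above might look like the following. The apiVersion and kind are assumptions inferred from the CRD manifest's name; verify them against the linked hugegraph-computer-crd.v1.yaml before use.

```yaml
# Hedged sketch of a HugeGraph-Computer job custom resource.
# apiVersion/kind are assumptions; all other field names come from the
# spec table above, and the values are illustrative placeholders.
apiVersion: hugegraph.apache.org/v1
kind: HugeGraphComputerJob
metadata:
  namespace: hugegraph-computer-system
  name: pagerank-demo             # hypothetical name
spec:
  jobId: pagerank-demo-001        # required
  algorithmName: pageRank         # required
  image: hugegraph/hugegraph-computer:latest   # required
  computerConf:                   # required: map of computer config options
    job.partitions_count: "2"
  workerInstances: 2              # required; overrides job.workers_count
```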

      KubeDriver Config Options

| config option | default value | description |
|---|---|---|
| k8s.build_image_bash_path | | The path of the command used to build the image. |
| k8s.enable_internal_algorithm | true | Whether to enable internal algorithms. |
| k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of the computer framework. |
| k8s.image_repository_password | | The password for logging in to the image repository. |
| k8s.image_repository_registry | | The address for logging in to the image repository. |
| k8s.image_repository_url | hugegraph/hugegraph-computer | The url of the image repository. |
| k8s.image_repository_username | | The username for logging in to the image repository. |
| k8s.internal_algorithm | [pageRank] | The name list of all internal algorithms. |
| k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithms. |
| k8s.jar_file_dir | /cache/jars/ | The directory that the algorithm jar is uploaded to. |
| k8s.kube_config | ~/.kube/config | The path of the k8s config file. |
| k8s.log4j_xml_path | | The log4j.xml path for the computer job. |
| k8s.namespace | hugegraph-computer-system | The namespace of the hugegraph-computer system. |
| k8s.pull_secret_names | [] | The names of pull-secrets for pulling images. |
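As a hedged sketch, a KubeDriver configuration assembled only from the options in the table above might look like this (the values are the listed defaults; nothing here is a recommended setting):

```properties
# Sketch: KubeDriver options (names taken from the table above;
# values are the documented defaults, shown for illustration only).
k8s.kube_config=~/.kube/config
k8s.namespace=hugegraph-computer-system
k8s.framework_image_url=hugegraph/hugegraph-computer:latest
k8s.enable_internal_algorithm=true
k8s.internal_algorithm=[pageRank]
```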

      5 - API

      5.1 - HugeGraph RESTful API

      HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.

      5.1.1 - Schema API

      1.1 Schema

      HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.

      Method & Url
      GET http://localhost:8080/graphs/{graph_name}/schema
      +

      client.truststore is for the client’s use and contains the trusted certificate.

      4.5 - HugeGraph-Computer Config

      Computer Config Options

      config optiondefault valuedescription
      algorithm.message_classorg.apache.hugegraph.computer.core.config.NullThe class of message passed when compute vertex.
      algorithm.params_classorg.apache.hugegraph.computer.core.config.NullThe class used to transfer algorithms’ parameters before algorithm been run.
      algorithm.result_classorg.apache.hugegraph.computer.core.config.NullThe class of vertex’s value, the instance is used to store computation result for the vertex.
      allocator.max_vertices_per_thread10000Maximum number of vertices per thread processed in each memory allocator
      bsp.etcd_endpointshttp://localhost:2379The end points to access etcd.
      bsp.log_interval30000The log interval(in ms) to print the log while waiting bsp event.
      bsp.max_super_step10The max super step of the algorithm.
      bsp.register_timeout300000The max timeout to wait for master and works to register.
      bsp.wait_master_timeout86400000The max timeout(in ms) to wait for master bsp event.
      bsp.wait_workers_timeout86400000The max timeout to wait for workers bsp event.
      hgkv.max_data_block_size65536The max byte size of hgkv-file data block.
      hgkv.max_file_size2147483648The max number of bytes in each hgkv-file.
      hgkv.max_merge_files10The max number of files to merge at one time.
      hgkv.temp_file_dir/tmp/hgkvThis folder is used to store temporary files, temporary files will be generated during the file merging process.
      hugegraph.namehugegraphThe graph name to load data and write results back.
      hugegraph.urlhttp://127.0.0.1:8080The hugegraph url to load data and write results back.
      input.edge_directionOUTThe data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
      input.edge_freqMULTIPLEThe frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
      input.filter_classorg.apache.hugegraph.computer.core.input.filter.DefaultInputFilterThe class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
      input.loader_schema_pathThe schema path of loader input, only takes effect when the input.source_type=loader is enabled
      input.loader_struct_pathThe struct path of loader input, only takes effect when the input.source_type=loader is enabled
      input.max_edges_in_one_vertex200The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
      input.source_typehugegraph-serverThe source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
      input.split_fetch_timeout300The timeout in seconds to fetch input splits
      input.split_max_splits10000000The maximum number of input splits
      input.split_page_size500The page size for streamed load input split data
      input.split_size1048576The input split size in bytes
      job.idlocal_0001The job id on Yarn cluster or K8s cluster.
      job.partitions_count1The partitions count for computing one graph algorithm job.
      job.partitions_thread_nums4The number of threads for partition parallel compute.
      job.workers_count1The workers count for computing one graph algorithm job.
      master.computation_classorg.apache.hugegraph.computer.core.master.DefaultMasterComputationMaster-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
      output.batch_size500The batch size of output
      output.batch_threads1The threads number used to batch output
      output.hdfs_core_site_pathThe hdfs core site path.
      output.hdfs_delimiter,The delimiter of hdfs output.
      output.hdfs_kerberos_enablefalseIs Kerberos authentication enabled for Hdfs.
      output.hdfs_kerberos_keytabThe Hdfs’s key tab file for kerberos authentication.
      output.hdfs_kerberos_principalThe Hdfs’s principal for kerberos authentication.
      output.hdfs_krb5_conf/etc/krb5.confKerberos configuration file.
      output.hdfs_merge_partitionstrueWhether merge output files of multiple partitions.
      output.hdfs_path_prefix/hugegraph-computer/resultsThe directory of hdfs output result.
      output.hdfs_replication3The replication number of hdfs.
      output.hdfs_site_pathThe hdfs site path.
      output.hdfs_urlhdfs://127.0.0.1:9000The hdfs url of output.
      output.hdfs_userhadoopThe hdfs user of output.
      output.output_classorg.apache.hugegraph.computer.core.output.LogOutputThe class to output the computation result of each vertex. Be called after iteration computation.
      output.result_namevalueThe value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
      output.result_write_typeOLAP_COMMONThe result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
      output.retry_interval10The retry interval when output failed
      output.retry_times3The retry times when output failed
      output.single_threads1The threads number used to single output
      output.thread_pool_shutdown_timeout60The timeout seconds of output threads pool shutdown
      output.with_adjacent_edgesfalseOutput the adjacent edges of the vertex or not
      output.with_edge_propertiesfalseOutput the properties of the edge or not
      output.with_vertex_propertiesfalseOutput the properties of the vertex or not
      sort.thread_nums4The number of threads performing internal sorting.
      transport.client_connect_timeout3000The timeout(in ms) of client connect to server.
      transport.client_threads4The number of transport threads for client.
      transport.close_timeout10000The timeout(in ms) of close server or close client.
      transport.finish_session_timeout0The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
      transport.heartbeat_interval20000The minimum interval(in ms) between heartbeats on client side.
      transport.io_modeAUTOThe network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
      transport.max_pending_requests8The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
      transport.max_syn_backlog511The capacity of SYN queue on server side, 0 means using system default value.
      transport.max_timeout_heartbeat_count120The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
      transport.min_ack_interval200The minimum interval(in ms) of server reply ack.
      transport.min_pending_requests6The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
      transport.network_retries3The number of retry attempts for network communication,if network unstable.
      transport.provider_classorg.apache.hugegraph.computer.core.network.netty.NettyTransportProviderThe transport provider, currently only supports Netty.
      transport.receive_buffer_size0The size of socket receive-buffer in bytes, 0 means using system default value.
      transport.recv_file_modetrueWhether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
      transport.send_buffer_size0The size of socket send-buffer in bytes, 0 means using system default value.
      transport.server_host127.0.0.1The server hostname or ip to listen on to transfer data.
      transport.server_idle_timeout360000The max timeout(in ms) of server idle.
      transport.server_port0The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
      transport.server_threads4The number of transport threads for server.
      transport.sync_request_timeout10000The timeout(in ms) to wait response after sending sync-request.
      transport.tcp_keep_alivetrueWhether enable TCP keep-alive.
      transport.transport_epoll_ltfalseWhether enable EPOLL level-trigger.
      transport.write_buffer_high_mark67108864The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
      transport.write_buffer_low_mark33554432The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.org.apache.hugegraph.config.OptionChecker$$Lambda$97/0x00000008001c8440@776a6d9b
      transport.write_socket_timeout3000The timeout(in ms) to write data to socket buffer.
      valuefile.max_segment_size1073741824The max number of bytes in each segment of value-file.
      worker.combiner_classorg.apache.hugegraph.computer.core.config.NullCombiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
      worker.computation_classorg.apache.hugegraph.computer.core.config.NullThe class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
      worker.data_dirs[jobs]The directories separated by ‘,’ that received vertices and messages can persist into.
      worker.edge_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same edge into one properties at inputstep.
      worker.partitionerorg.apache.hugegraph.computer.core.graph.partition.HashPartitionerThe partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
      worker.received_buffers_bytes_limit104857600The limit bytes of buffers of received data, the total size of all buffers can’t excess this limit. If received buffers reach this limit, they will be merged into a file.
      worker.vertex_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same vertex into one properties at inputstep.
      worker.wait_finish_messages_timeout86400000The max timeout(in ms) message-handler wait for finish-message of all workers.
      worker.wait_sort_timeout600000The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers.
      worker.write_buffer_capacity52428800The initial size of write buffer that used to store vertex or message.
      worker.write_buffer_threshold52428800The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message.

      K8s Operator Config Options

      NOTE: Option needs to be converted through environment variable settings, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL

      config optiondefault valuedescription
      k8s.auto_destroy_podtrueWhether to automatically destroy all pods when the job is completed or failed.
      k8s.close_reconciler_timeout120The max timeout(in ms) to close reconciler.
      k8s.internal_etcd_urlhttp://127.0.0.1:2379The internal etcd url for operator system.
      k8s.max_reconcile_retry3The max retry times of reconcile.
      k8s.probe_backlog50The maximum backlog for serving health probes.
      k8s.probe_port9892The value is the port that the controller bind to for serving health probes.
      k8s.ready_check_internal1000The time interval(ms) of check ready.
      k8s.ready_timeout30000The max timeout(in ms) of check ready.
      k8s.reconciler_count10The max number of reconciler thread.
      k8s.resync_period600000The minimum frequency at which watched resources are reconciled.
      k8s.timezoneAsia/ShanghaiThe timezone of computer job and operator.
      k8s.watch_namespacehugegraph-computer-systemThe value is watch custom resources in the namespace, ignore other namespaces, the ‘*’ means is all namespaces will be watched.

      HugeGraph-Computer CRD

      CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

spec | default value | description | required
algorithmName | | The name of the algorithm. | true
jobId | | The job id. | true
image | | The image of the algorithm. | true
computerConf | | The map of computer config options. | true
workerInstances | | The number of worker instances; it overrides the 'job.workers_count' option. | true
pullPolicy | Always | The pull policy of the image; for details refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false
pullSecrets | | The pull secrets of the image; for details refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false
masterCpu | | The cpu limit of the master; the unit can be 'm' or omitted; for details refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
workerCpu | | The cpu limit of the worker; the unit can be 'm' or omitted; for details refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
masterMemory | | The memory limit of the master; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
workerMemory | | The memory limit of the worker; the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
log4jXml | | The content of log4j.xml for the computer job. | false
jarFile | | The jar path of the computer algorithm. | false
remoteJarUri | | The remote jar uri of the computer algorithm; it overrides the algorithm image. | false
jvmOptions | | The java startup parameters of the computer job. | false
envVars | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false
envFrom | | Please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false
masterCommand | bin/start-computer.sh | The run command of the master, equivalent to the 'Entrypoint' field of Docker. | false
masterArgs | ["-r master", "-d k8s"] | The run args of the master, equivalent to the 'Cmd' field of Docker. | false
workerCommand | bin/start-computer.sh | The run command of the worker, equivalent to the 'Entrypoint' field of Docker. | false
workerArgs | ["-r worker", "-d k8s"] | The run args of the worker, equivalent to the 'Cmd' field of Docker. | false
volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
secretPaths | | The map of k8s-secret name and mount path. | false
configMapPaths | | The map of k8s-configmap name and mount path. | false
podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false
securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false
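Putting the required spec fields together, a minimal job manifest might look like the sketch below. This is not a verified manifest: the `apiVersion` and `kind` values are assumptions inferred from the CRD file linked above, and the names, image, and config values are placeholders:

```yaml
# Hypothetical HugeGraph-Computer job manifest.
# Field names come from the spec table above; apiVersion/kind are assumed
# from the linked CRD file, and all values are illustrative placeholders.
apiVersion: hugegraph.apache.org/v1
kind: HugeGraphComputerJob
metadata:
  namespace: hugegraph-computer-system
  name: pagerank-demo            # placeholder
spec:
  jobId: pagerank-demo-001       # placeholder
  algorithmName: page_rank       # placeholder
  image: hugegraph/hugegraph-computer:latest   # placeholder image
  workerInstances: 3             # overrides 'job.workers_count'
  computerConf:
    job.partitions_count: "3"    # placeholder option value
```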

      KubeDriver Config Options

config option | default value | description
k8s.build_image_bash_path | | The path of the command used to build the image.
k8s.enable_internal_algorithm | true | Whether to enable the internal algorithms.
k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of the computer framework.
k8s.image_repository_password | | The password for logging in to the image repository.
k8s.image_repository_registry | | The address for logging in to the image repository.
k8s.image_repository_url | hugegraph/hugegraph-computer | The url of the image repository.
k8s.image_repository_username | | The username for logging in to the image repository.
k8s.internal_algorithm | [pageRank] | The name list of all internal algorithms.
k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of the internal algorithms.
k8s.jar_file_dir | /cache/jars/ | The directory to which algorithm jars are uploaded.
k8s.kube_config | ~/.kube/config | The path of the k8s config file.
k8s.log4j_xml_path | | The log4j.xml path for the computer job.
k8s.namespace | hugegraph-computer-system | The namespace of the hugegraph-computer system.
k8s.pull_secret_names | [] | The names of the pull-secrets for pulling images.
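For reference, the KubeDriver options above are plain key-value settings; a hedged fragment of a driver configuration (the file layout is an assumption, and the values shown are just the table's defaults) could look like:

```properties
# Hypothetical KubeDriver settings (keys from the table above; values are examples)
k8s.kube_config=~/.kube/config
k8s.namespace=hugegraph-computer-system
k8s.framework_image_url=hugegraph/hugegraph-computer:latest
k8s.enable_internal_algorithm=true
k8s.internal_algorithm=[pageRank]
```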

      5 - API

      5.1 - HugeGraph RESTful API

      HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.

Besides the docs below, you can also use swagger-ui to browse the RESTful API at localhost:8080/swagger-ui/index.html. Here is an example

      5.1.1 - Schema API

      1.1 Schema

      HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.

      Method & Url
      GET http://localhost:8080/graphs/{graph_name}/schema
       
 e.g.: GET http://localhost:8080/graphs/hugegraph/schema
       
      Response Status
      200
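As a sketch of calling this endpoint (assuming a HugeGraph-Server listening on localhost:8080), the helper below simply fills the `{graph_name}` placeholder of the URL shown above; the live request is left as a comment since it needs a running server:

```python
import urllib.request  # only needed for the (optional) live request below

def build_schema_url(graph_name: str, base: str = "http://localhost:8080") -> str:
    """Fill the {graph_name} placeholder of the schema endpoint shown above."""
    return f"{base}/graphs/{graph_name}/schema"

url = build_schema_url("hugegraph")
print(url)  # http://localhost:8080/graphs/hugegraph/schema

# With a server running locally, the full schema (PropertyKey, VertexLabel,
# EdgeLabel and IndexLabel) can then be fetched as JSON:
#   with urllib.request.urlopen(url) as resp:
#       schema = resp.read().decode()
```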
style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">hubble</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hubble</span><span style="color:#f8f8f8;text-decoration:underline"> @@ -7408,10 +7415,13 @@ target directory. Copy the Jar package to the <code>plugins</code> directo <div style="text-align: center;"> <img src="/docs/images/images-hubble/311图创建.png" alt="image"> </div> -<p>Create graph by filling in the content as follows::</p> +<p>Create graph by filling in the content as follows:</p> <center> <img src="/docs/images/images-hubble/311图创建2.png" alt="image"> </center> +<blockquote> +<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. If <code>hubble</code> and <code>server</code> is in the same docker network, we <strong>recommend</strong> using the <code>container_name</code> (in our example, it is <code>graph</code>) as the hostname, and <code>8080</code> as the port. Or you can use the <strong>host IP</strong> as the hostname, and the port is configured by the host for the server.</p> +</blockquote> <h5 id="412graph-access">4.1.2 Graph Access</h5> <p>Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.</p> <center> @@ -7501,7 +7511,7 @@ target directory. 
Copy the Jar package to the <code>plugins</code> directo <center> <img src="/docs/images/images-hubble/3241边创建.png" alt="image"> </center> -<p>Graph mode:</p> +<p>Graph mode:</p> <center> <img src="/docs/images/images-hubble/3241边创建2.png" alt="image"> </center> @@ -7518,6 +7528,9 @@ target directory. Copy the Jar package to the <code>plugins</code> directo <h5 id="425-index-types">4.2.5 Index Types</h5> <p>Displays vertex and edge indices for vertex types and edge types.</p> <h4 id="43-data-import">4.3 Data Import</h4> +<blockquote> +<p><strong>Note</strong>:currently, we recommend to use <a href="/en/docs/quickstart/hugegraph-loader">hugegraph-loader</a> to import data formally. The built-in import of <code>hubble</code> is used for <strong>testing</strong> and <strong>getting started</strong>.</p> +</blockquote> <p>The usage process of data import is as follows:</p> <center> <img src="/docs/images/images-hubble/33导入流程.png" alt="image"> diff --git a/docs/quickstart/_print/index.html b/docs/quickstart/_print/index.html index 078ad711a..275b50155 100644 --- a/docs/quickstart/_print/index.html +++ b/docs/quickstart/_print/index.html @@ -1,13 +1,14 @@ Quick Start | HugeGraph

    1 - HugeGraph-Server Quick Start

    1 HugeGraph-Server Overview

    HugeGraph-Server is the core part of the HugeGraph Project, containing submodules such as Core, Backend, and API.

    The Core Module is an implementation of the Tinkerpop interface; The Backend module is used to save the graph data to the data store, currently supported backends include: Memory, Cassandra, ScyllaDB, RocksDB; The API Module provides HTTP Server, which converts Client’s HTTP request into a call to Core Module.

    There will be two spellings HugeGraph-Server and HugeGraphServer in the document, and other modules are similar. There is no big difference in the meaning of these two ways of writing, which can be distinguished as follows: HugeGraph-Server represents the code of server-related components, HugeGraphServer represents the service process.

    2 Dependency for Building/Running

    2.1 Install Java 11 (JDK 11)

    Consider using Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now), and configure the JDK yourself.

    Be sure to execute the java -version command to check the JDK version before proceeding

    3 Deploy

    There are four ways to deploy HugeGraph-Server components:

    • Method 1: Use Docker container (recommended)
    • Method 2: Download the binary tarball
    • Method 3: Source code compilation
    • Method 4: One-click deployment

    You can refer to Docker deployment guide.

    We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start an inner HugeGraph server with RocksDB in background.

    Optional:

    1. use docker exec -it graph bash to enter the container to do some operations.
    2. use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph.

    Also, we can use docker-compose to deploy, with docker-compose up -d. Here is an example docker-compose.yml:

    Quick Start

    1 - HugeGraph-Server Quick Start

    1 HugeGraph-Server Overview

    HugeGraph-Server is the core part of the HugeGraph Project, containing submodules such as Core, Backend, and API.

    The Core Module is an implementation of the Tinkerpop interface; The Backend module is used to save the graph data to the data store, currently supported backends include: Memory, Cassandra, ScyllaDB, RocksDB; The API Module provides HTTP Server, which converts Client’s HTTP request into a call to Core Module.

    There will be two spellings HugeGraph-Server and HugeGraphServer in the document, and other modules are similar. There is no big difference in the meaning of these two ways of writing, which can be distinguished as follows: HugeGraph-Server represents the code of server-related components, HugeGraphServer represents the service process.

    2 Dependency for Building/Running

    2.1 Install Java 11 (JDK 11)

    Consider using Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now), and configure the JDK yourself.

    Be sure to execute the java -version command to check the JDK version before proceeding

    3 Deploy

    There are four ways to deploy HugeGraph-Server components:

    • Method 1: Use Docker container (recommended)
    • Method 2: Download the binary tarball
    • Method 3: Source code compilation
    • Method 4: One-click deployment

    You can refer to Docker deployment guide.

    We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start an inner HugeGraph server with RocksDB in background.

    Optional:

    1. use docker exec -it graph bash to enter the container to do some operations.
    2. use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result; the detailed steps are described in 5.1.1

    Also, if we want to manage other HugeGraph-related instances in one file, we can deploy with docker-compose using the command docker-compose up -d (you can configure only the server). Here is an example docker-compose.yml:

    version: '3'
     services:
       graph:
         image: hugegraph/hugegraph
    -    #environment:
    +    # environment:
         #  - PRELOAD=true
    +    # PRELOAD is an option to preload a built-in sample graph when initializing.
         ports:
    -      - 18080:8080
    +      - 8080:8080
     

    3.2 Download the binary tarball

    You could download the binary tarball from the download page of ASF site like this:

    # use the latest version, here is 1.0.0 for example
     wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
     tar zxf *hugegraph*.tar.gz
    @@ -56,7 +57,7 @@
     cd *hugegraph*/*tool* 
     

    note: ${version} is the version number; the latest version can be found on the Download page, or click the link there to download directly

    The general entry script for HugeGraph-Tools is bin/hugegraph, Users can use the help command to view its usage, here only the commands for one-click deployment are introduced.

    bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
     

    {hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed; users can view the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download URL is used if not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.

    4 Config

    If you need to quickly start HugeGraph just for testing, then you only need to modify a few configuration items (see next section). -for detailed configuration introduction, please refer to configuration document and introduction to configuration items

    5 Startup

    5.1 Use Docker to startup

    In 3.1 Use Docker container, we have introduced how to use docker to deploy hugegraph-server. The server can also preload an example graph by setting a parameter.

    5.1.1 Create example graph when starting server

    Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.

    1. Use docker run

      Use docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

    2. Use docker-compose

      Create docker-compose.yml as following

      version: '3'
      +for detailed configuration introduction, please refer to configuration document and introduction to configuration items

      5 Startup

      5.1 Use Docker to startup

      In 3.1 Use Docker container, we have introduced how to use docker to deploy hugegraph-server. The server can also preload an example graph by setting a parameter.

      5.1.1 Create example graph when starting server

      Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.

      1. Use docker run

        Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

      2. Use docker-compose

        Create docker-compose.yml as follows. We should set the environment variable PRELOAD=true. example.groovy is a predefined script that preloads the sample data; if needed, we can mount a new example.groovy to change the preloaded data.

        version: '3'
           services:
             graph:
               image: hugegraph/hugegraph:latest
        @@ -64,7 +65,7 @@
               environment:
                 - PRELOAD=true
               ports:
        -        - 18080:8080
        +        - 8080:8080
         

        Use docker-compose up -d to start the container
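If you want to change the preloaded data, mounting your own example.groovy is one way to do it. The sketch below is an illustration only: the in-container path /hugegraph/scripts/example.groovy is a hypothetical assumption and may differ by image version, so check inside the image before relying on it.

```yaml
version: '3'
services:
  graph:
    image: hugegraph/hugegraph:latest
    environment:
      - PRELOAD=true
    volumes:
      # hypothetical in-container path; verify where the image keeps example.groovy
      - ./example.groovy:/hugegraph/scripts/example.groovy
    ports:
      - 8080:8080
```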

      And use the RESTful API to request HugeGraphServer and get the following result:

      > curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
       
       {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
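For a quick sanity check of such a response without extra tooling, a small shell pipeline can pull out just the vertex ids. This is only an illustrative sketch over the JSON shape shown above (line-oriented grep/sed matching, not a real JSON parser):

```shell
# extract the "id" values from a vertices response like the one above
response='{"vertices":[{"id":"2:lop","label":"software"},{"id":"1:josh","label":"person"}]}'
echo "$response" | grep -o '"id":"[^"]*"' | sed 's/"id":"//;s/"$//'
```

Against a live server, the same pipeline would read from the curl … | gunzip output instead of the canned $response variable.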
      @@ -79,7 +80,7 @@
       serializer=binary
       rocksdb.data_path=.
       rocksdb.wal_path=.
      -

      Initialize the database (required only on first startup)

      cd *hugegraph-${version}
      +

      Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
      @@ -97,7 +98,7 @@
       
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3
      -

      Initialize the database (required only on first startup)

      cd *hugegraph-${version}
      +

      Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       Initing HugeGraph Store...
       2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
      @@ -132,7 +133,7 @@
       
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3
      -

      Since the scylladb database itself is an “optimized version” based on cassandra, users who do not have scylladb installed can also use cassandra as the backend storage directly: they only need to change the backend and serializer to scylladb, and point the host and port to the seeds and port of the cassandra cluster. However, this is not recommended, as it will not take advantage of scylladb itself.

      Initialize the database (required only on first startup)

      cd *hugegraph-${version}
      +

      Since the scylladb database itself is an “optimized version” based on cassandra, users who do not have scylladb installed can also use cassandra as the backend storage directly: they only need to change the backend and serializer to scylladb, and point the host and port to the seeds and port of the cassandra cluster. However, this is not recommended, as it will not take advantage of scylladb itself.
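Concretely, the Cassandra-as-ScyllaDB-backend setup just described would amount to property changes roughly like the following. The key names mirror the cassandra examples earlier in this section, but treat this as a sketch and verify against your own hugegraph.properties:

```properties
# use the scylladb backend/serializer, but point at a cassandra cluster
backend=scylladb
serializer=scylladb
# seeds and port of the cassandra cluster
cassandra.host=localhost
cassandra.port=9042
```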

      Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
      @@ -148,7 +149,7 @@
       #hbase.enable_partition=true
       #hbase.vertex_partitions=10
       #hbase.edge_partitions=30
      -

      Initialize the database (required only on first startup)

      cd *hugegraph-${version}
      +

      Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
      @@ -166,7 +167,7 @@
       jdbc.reconnect_max_times=3
       jdbc.reconnect_interval=3
       jdbc.ssl_mode=false
      -

      Initialize the database (required only on first startup)

      cd *hugegraph-${version}
      +

      Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
      @@ -234,7 +235,7 @@
               ...
           ]
       }
      -

      For detailed API, please refer to RESTful-API

      7 Stop Server

      $cd *hugegraph-${version}
      +

      For detailed API, please refer to RESTful-API

      You can also visit localhost:8080/swagger-ui/index.html to check the API.

      image

      7 Stop Server

      $cd *hugegraph-${version}
       $bin/stop-hugegraph.sh
       

      8 Debug Server with IntelliJ IDEA

      Please refer to Setup Server in IDEA

      2 - HugeGraph-Loader Quick Start

      1 HugeGraph-Loader Overview

      HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.

      Currently supported data sources include:

      • Local disk file or directory, supports TEXT, CSV and JSON format files, supports compressed files
      • HDFS file or directory, supports compressed files
      • Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server

      Local disk files and HDFS files support resumable uploads.

      It will be explained in detail below.

      Note: HugeGraph-Loader requires HugeGraph Server service, please refer to HugeGraph-Server Quick Start to download and start Server

      2 Get HugeGraph-Loader

      There are two ways to get HugeGraph-Loader:

      • Download the compiled tarball
      • Clone source code then compile and install

      2.1 Download the compiled archive

      Download the latest version of the HugeGraph-Toolchain release package:

      wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-toolchain-incubating-1.0.0.tar.gz
       tar zxf *hugegraph*.tar.gz
      @@ -712,13 +713,13 @@
       --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
       --username admin --token admin --host xx.xx.xx.xx --port 8093 \
       --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g
      -

      3 - HugeGraph-Hubble Quick Start

      1 HugeGraph-Hubble Overview

      HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete toolchain ecosystem for export, backup, and recovery, effectively solving the storage, query, and correlation analysis needs of massive graph data. HugeGraph is widely used in risk control, insurance claims, recommendation search, public security crime crackdown, knowledge graphs, network security, and IT operation and maintenance at banks and securities companies, and is committed to allowing more industries, organizations, and users to enjoy the comprehensive value of their data.

      HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a whole-process wizard for graph applications. It is designed to improve usability, lower the barrier to entry, and provide a more efficient and easy-to-use user experience.

      The platform mainly includes the following modules:

      Graph Management

      The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.

      Metadata Modeling

      The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.

      Data Import

      Data import is to convert the user’s business data into the vertices and edges of the graph and insert it into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel operation of multiple import tasks are realized. Improve import performance. After entering the import task, you only need to follow the platform step prompts, upload files as needed, and fill in the content to easily implement the import process of graph data. At the same time, it supports breakpoint resuming, error retry mechanism, etc., which reduces import costs and improves efficiency.

      Graph Analysis

      By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, and functions such as customized multidimensional path query of vertices can be provided, and three kinds of graph result display methods are provided, including: graph form, table form, Json form, and multidimensional display. The data form meets the needs of various scenarios used by users. It provides functions such as running records and collection of common statements, realizing the traceability of graph operations, and the reuse and sharing of query input, which is fast and efficient. It supports the export of graph data, and the export format is Json format.

      Task Management

      For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.

      2 Deploy

      There are three ways to deploy hugegraph-hubble

      • Use Docker (recommended)
      • Download the Toolchain binary package
      • Source code compilation

      Special Note: If you are starting hubble with Docker, and hubble and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to localhost/127.0.0.1. This will refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server. If hubble and server is in the same docker network, you can use the container_name as the hostname, and 8080 as the port. Or you can use the ip of the host as the hostname, and the port is configured by the host for the server.

      We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

      Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.

      Use docker-compose up -d. The docker-compose.yml is as follows:

      version: '3'
      +

      3 - HugeGraph-Hubble Quick Start

      1 HugeGraph-Hubble Overview

      HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete toolchain ecosystem for export, backup, and recovery, effectively solving the storage, query, and correlation analysis needs of massive graph data. HugeGraph is widely used in risk control, insurance claims, recommendation search, public security crime crackdown, knowledge graphs, network security, and IT operation and maintenance at banks and securities companies, and is committed to allowing more industries, organizations, and users to enjoy the comprehensive value of their data.

      HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a whole-process wizard for graph applications. It is designed to improve usability, lower the barrier to entry, and provide a more efficient and easy-to-use user experience.

      The platform mainly includes the following modules:

      Graph Management

      The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.

      Metadata Modeling

      The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.

      Data Import

      Data import is to convert the user’s business data into the vertices and edges of the graph and insert it into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel operation of multiple import tasks are realized. Improve import performance. After entering the import task, you only need to follow the platform step prompts, upload files as needed, and fill in the content to easily implement the import process of graph data. At the same time, it supports breakpoint resuming, error retry mechanism, etc., which reduces import costs and improves efficiency.

      Graph Analysis

      By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, and functions such as customized multidimensional path query of vertices can be provided, and three kinds of graph result display methods are provided, including: graph form, table form, Json form, and multidimensional display. The data form meets the needs of various scenarios used by users. It provides functions such as running records and collection of common statements, realizing the traceability of graph operations, and the reuse and sharing of query input, which is fast and efficient. It supports the export of graph data, and the export format is Json format.

      Task Management

      For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.

      2 Deploy

      There are three ways to deploy hugegraph-hubble

      • Use Docker (recommended)
      • Download the Toolchain binary package
      • Source code compilation

      Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, please do not set it directly to localhost/127.0.0.1. This will refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server.

      If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port as configured on the host for the server.

      We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

      Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.

      Use docker-compose up -d. The docker-compose.yml is as follows:

      version: '3'
       services:
         server:
           image: hugegraph/hugegraph
           container_name: graph
           ports:
      -      - 18080:8080
      +      - 8080:8080
       
         hubble:
           image: hugegraph/hubble
      @@ -749,7 +750,7 @@
       mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
       cd apache-hugegraph-hubble-incubating*
       

      Run hubble

      bin/start-hubble.sh -d
      -

      3 Platform Workflow

      The module usage process of the platform is as follows:

      image

      4 Platform Instructions

      4.1 Graph Management

      4.1.1 Graph creation

      Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.

      image

      Create graph by filling in the content as follows:

      image
      4.1.2 Graph Access

      Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.

      image
      4.1.3 Graph management
      1. Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
      2. Search range: You can search for the graph name and ID.
      image

      4.2 Metadata Modeling (list + graph mode)

      4.2.1 Module entry

      Left navigation:

      image
      4.2.2 Property type
      4.2.2.1 Create type
      1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
      2. Created attributes can be used as attributes of vertex type and edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.2.2 Reuse
      1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
      2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.

      Select reuse items:

      image

      Check reuse items:

      image
      4.2.2.3 Management
      1. You can delete a single item or delete it in batches in the attribute list.
      4.2.3 Vertex type
      4.2.3.1 Create type
      1. Fill in or select the vertex type name, ID strategy, association attribute, primary key attribute, vertex style, content displayed below the vertex in the query result, and index information: including whether to create a type index, and the specific content of the attribute index, complete the vertex Type creation.

      List mode:

      image

      Graph mode:

      image
      4.2.3.2 Reuse
      1. The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together.
      2. The reuse method is similar to the property reuse, see 3.2.2.2.
      4.2.3.3 Administration
      1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.

      2. You can delete a single item or delete it in batches.

      image
      4.2.4 Edge Types
      4.2.4.1 Create
      1. Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, content displayed below the edge in the query result, and index information: including whether to create a type index, and attribute index The specific content, complete the creation of the edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.4.2 Reuse
      1. The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
      2. The reuse method is similar to the property reuse, see 3.2.2.2.
      4.2.4.3 Administration
      1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
      2. You can delete a single item or delete it in batches.
      4.2.5 Index Types

      Displays vertex and edge indices for vertex types and edge types.

      4.3 Data Import

      The usage process of data import is as follows:

      image
      4.3.1 Module entrance

      Left navigation:

      image
      4.3.2 Create task
      1. Fill in the task name and remarks (optional) to create an import task.
      2. Multiple import tasks can be created and imported in parallel.
      image
      4.3.3 Uploading files
      1. Upload the file that needs to be composed. The currently supported format is CSV, which will be updated continuously in the future.
      2. Multiple files can be uploaded at the same time.
      image
      4.3.4 Setting up data mapping
      1. Set up data mapping for uploaded files, including file settings and type settings

      2. File settings: Check or fill in whether to include the header, separator, encoding format and other settings of the file itself, all set the default values, no need to fill in manually

      3. Type setting:

        1. Vertex map and edge map:

          【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;

          【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;

        2. Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.

        3. After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.

      Fill in the settings map:

      image

      Mapping list:

      image
      4.3.5 Import data

      Before importing, you need to fill in the import setting parameters. After filling in, you can start importing data into the gallery.

      1. Import settings
      • The import setting parameter items are as shown in the figure below, all set the default value, no need to fill in manually
      image
      1. Import details
      • Click Start Import to start the file import task
      • The import details provide the mapping type, import speed, import progress, time-consuming and the specific status of the current task set for each uploaded file, and can pause, resume, stop and other operations for each task
      • If the import fails, you can view the specific reason
      image

      4.4 Data Analysis

      4.4.1 Module entry

      Left navigation:

      image
      4.4.2 Multi-image switching

      By switching the entrance on the left, flexibly switch the operation space of multiple graphs

      image
      4.4.3 Graph Analysis and Processing

      HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, and create and delete vertices/edges. , vertex/edge attribute modification, etc.

      After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].

      Support zoom, center, full screen, export and other operations.

      【Picture Mode】

      image

      【Table mode】

      image

      【Json mode】

      image
      4.4.4 Data Details

      Click the vertex/edge entity to view the data details of the vertex/edge, including: vertex/edge type, vertex ID, attribute and corresponding value, expand the information display dimension of the graph, and improve the usability.

      4.4.5 Multidimensional Path Query of Graph Results

      In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.

      Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc.

      • Expand: Click to display the vertices associated with the selected point.
      • Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
      • Hide: When clicked, hides the selected point and its associated edges.

      Double-clicking a vertex also displays the vertex associated with the selected point.

      image
      4.4.6 Add vertex/edge
      4.4.6.1 Added vertex

      In the graph area, two entries can be used to dynamically add vertices, as follows:

      1. Click on the graph area panel, the Add Vertex entry appears
      2. Click the first icon in the action bar in the upper right corner

      Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.

      The entry is as follows:

      image

      Add the vertex content as follows:

      image
      4.4.6.2 Add edge

      Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.

      4.4.7 Execute the query of records and favorites
      1. Record each query record at the bottom of the graph area, including: query time, execution type, content, status, time-consuming, as well as [collection] and [load] operations, to achieve a comprehensive record of graph execution, with traces to follow, and Can quickly load and reuse execution content
      2. Provides the function of collecting sentences, which can be used to collect frequently used sentences, which is convenient for fast calling of high-frequency sentences.
      image

      4.5 Task Management

      4.5.1 Module entry

      Left navigation:

      image
      4.5.2 Task Management
      1. Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
      • gremlin: Gremlin tasks
      • algorithm: OLAP algorithm task
      • remove_schema: remove metadata
      • rebuild_index: rebuild the index
      1. The list displays the asynchronous task information of the current graph, including: task ID, task name, task type, creation time, time-consuming, status, operation, and realizes the management of asynchronous tasks.
      2. Support filtering by task type and status
      3. Support searching for task ID and task name
      4. Asynchronous tasks can be deleted or deleted in batches
      image
      4.5.3 Gremlin asynchronous tasks
      1. Create a task
      • The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to the Gremlin task, after clicking execute, an asynchronous task will be created in the asynchronous task center;
      1. Task submission
      • After the task is submitted successfully, the graph area returns the submission result and task ID
      1. Mission details
      • Provide [View] entry, you can jump to the task details to view the specific execution of the current task After jumping to the task center, the currently executing task line will be displayed directly
      image

      Click to view the entry to jump to the task management list, as follows:

      image
      1. View the results
      • The results are displayed in the form of json
      4.5.4 OLAP algorithm tasks

      There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.

      4.5.5 Delete metadata, rebuild index
      1. Create a task
      • In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created
      image
      • When editing an existing vertex/edge type operation, when adding an index, an asynchronous task of creating an index can be created
      image
      1. Task details
      • After confirming/saving, you can jump to the task center to view the details of the current task
      image

      4 - HugeGraph-Client Quick Start

      1 Overview Of Hugegraph

      HugeGraph-Client sends HTTP request to HugeGraph-Server to obtain and parse the execution result of Server. Currently only the HugeGraph-Client for Java is provided. You can use HugeGraph-Client to write Java code to operate HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing gremlin statements.

      2 What You Need

      • Java 11 (also support Java 8)
      • Maven 3.5+

      3 How To Use

      The basic steps to use HugeGraph-Client are as follows:

      • Build a new Maven project by IDEA or Eclipse
      • Add HugeGraph-Client dependency in pom file;
      • Create an object to invoke the interface of HugeGraph-Client

      See the complete example in the following section for the detail.

      4 Complete Example

      4.1 Build New Maven Project

      Using IDEA or Eclipse to create the project:

      4.2 Add Hugegraph-Client Dependency In POM

      <dependencies>
      +

      3 Platform Workflow

      The module usage process of the platform is as follows:

      image

      4 Platform Instructions

      4.1 Graph Management

      4.1.1 Graph creation

      Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.

      image

      Create graph by filling in the content as follows:

      image

Special Note: If you start hubble with Docker and hubble and the server are on the same host, do not set the hostname of the graph to localhost/127.0.0.1 when configuring it on the Hubble web page. If hubble and the server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Alternatively, you can use the host IP as the hostname, with the port being whatever the host maps to the server.

      4.1.2 Graph Access

Provides access to the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on the graph.

      image
      4.1.3 Graph management
1. Users can manage graphs in a unified way through the overview, search, and editing/deleting the information of individual graphs.
      2. Search range: You can search for the graph name and ID.
      image

      4.2 Metadata Modeling (list + graph mode)

      4.2.1 Module entry

      Left navigation:

      image
      4.2.2 Property type
      4.2.2.1 Create type
      1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
      2. Created attributes can be used as attributes of vertex type and edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.2.2 Reuse
      1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
      2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.

      Select reuse items:

      image

      Check reuse items:

      image
      4.2.2.3 Management
      1. You can delete a single item or delete it in batches in the attribute list.
      4.2.3 Vertex type
      4.2.3.1 Create type
1. Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and index information (whether to create a type index, plus the specific attribute indexes) to complete the creation of the vertex type.

      List mode:

      image

      Graph mode:

      image
      4.2.3.2 Reuse
1. Reusing a vertex type also reuses the attributes and attribute indexes associated with that type.
2. The reuse method is similar to property reuse; see 4.2.2.2.
      4.2.3.3 Administration
      1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.

      2. You can delete a single item or delete it in batches.

      image
      4.2.4 Edge Types
      4.2.4.1 Create
1. Fill in or select the edge type name, start vertex type, end vertex type, associated attributes, whether multiple connections are allowed, edge style, the content displayed below the edge in query results, and index information (whether to create a type index, plus the specific attribute indexes) to complete the creation of the edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.4.2 Reuse
1. Reusing an edge type also reuses its start vertex type, end vertex type, associated attributes, and attribute indexes.
2. The reuse method is similar to property reuse; see 4.2.2.2.
      4.2.4.3 Administration
      1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
      2. You can delete a single item or delete it in batches.
      4.2.5 Index Types

      Displays vertex and edge indices for vertex types and edge types.

      4.3 Data Import

Note: currently we recommend using hugegraph-loader for formal data import; hubble's built-in import is intended for testing and getting started.

      The usage process of data import is as follows:

      image
      4.3.1 Module entrance

      Left navigation:

      image
      4.3.2 Create task
      1. Fill in the task name and remarks (optional) to create an import task.
      2. Multiple import tasks can be created and imported in parallel.
      image
      4.3.3 Uploading files
1. Upload the files from which the graph will be built. The currently supported format is CSV; more formats will be added over time.
      2. Multiple files can be uploaded at the same time.
      image
      4.3.4 Setting up data mapping
1. Set up data mapping for uploaded files, including file settings and type settings.

2. File settings: check or fill in file-level settings such as whether the file has a header, the separator, and the encoding format; all have default values and need not be filled in manually.

      3. Type setting:

  1. Vertex map and edge map:

    【Vertex Type】: select the vertex type and map column data from the uploaded file to its ID;

    【Edge Type】: select the edge type and map column data from the uploaded file to the ID columns of its start vertex type and end vertex type;

  2. Mapping settings: map column data from the uploaded file to the attributes of the selected vertex type. If an attribute name matches a column header in the file, the mapping attribute is matched automatically and need not be filled in manually.

  3. After completing the settings, the settings list is displayed before proceeding to the next step. Adding, editing, and deleting mappings is supported.
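As an illustration of the auto-matching described above (the vertex type and its attribute names here are hypothetical examples, not taken from the original), a vertex file whose header matches the attribute names could look like:

```csv
name,age,city
marko,29,Beijing
vadas,27,Hongkong
```

Because the header columns name, age, and city match the attribute names of the (assumed) person vertex type, the attribute mappings are filled in automatically.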

      Fill in the settings map:

      image

      Mapping list:

      image
      4.3.5 Import data

Before importing, you need to fill in the import settings. After that, you can start importing data into the graph.

1. Import settings
• The import setting parameters are as shown in the figure below; all have default values and need not be filled in manually
image
2. Import details
• Click Start Import to start the file import task
• The import details show, for each uploaded file, the mapping type, import speed, import progress, elapsed time, and the specific status of the current task, and each task can be paused, resumed, or stopped
• If the import fails, you can view the specific reason
      image

      4.4 Data Analysis

      4.4.1 Module entry

      Left navigation:

      image
4.4.2 Multi-graph switching

Use the entry on the left to flexibly switch the working space among multiple graphs.

      image
      4.4.3 Graph Analysis and Processing

HugeGraph supports Gremlin, the graph traversal language of Apache TinkerPop3. Gremlin is a general-purpose graph database query language. By entering Gremlin statements and clicking Execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
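For example, statements like the following can be entered (an illustrative sketch; the person label and its name/age properties are assumptions, not part of the original):

```groovy
// Query: up to 10 "person" vertices whose age is greater than 29
g.V().hasLabel('person').has('age', gt(29)).limit(10)

// Create: add a vertex with properties (label and values are hypothetical)
g.addV('person').property('name', 'marko').property('age', 29)
```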

      After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].

      Support zoom, center, full screen, export and other operations.

【Graph Mode】

      image

      【Table mode】

      image

      【Json mode】

      image
      4.4.4 Data Details

Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the information displayed for the graph and improving usability.

      4.4.5 Multidimensional Path Query of Graph Results

In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in a query result to mine the graph results in a customized way.

Right-click a vertex to open its menu, from which the vertex can be expanded, queried, hidden, etc.

      • Expand: Click to display the vertices associated with the selected point.
      • Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
      • Hide: When clicked, hides the selected point and its associated edges.

      Double-clicking a vertex also displays the vertex associated with the selected point.

      image
      4.4.6 Add vertex/edge
4.4.6.1 Add vertex

      In the graph area, two entries can be used to dynamically add vertices, as follows:

1. Click the graph area panel to bring up the Add Vertex entry
      2. Click the first icon in the action bar in the upper right corner

      Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.

      The entry is as follows:

      image

      Add the vertex content as follows:

      image
      4.4.6.2 Add edge

      Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.

4.4.7 Query records and favorites
1. Each query is recorded at the bottom of the graph area, including query time, execution type, content, status, and elapsed time, along with [favorite] and [load] operations, giving a complete, traceable record of graph executions whose content can be quickly loaded and reused.
2. A statement-favoriting function is provided for collecting frequently used statements, making high-frequency statements quick to invoke.
      image

      4.5 Task Management

      4.5.1 Module entry

      Left navigation:

      image
      4.5.2 Task Management
1. Unified management and result viewing of asynchronous tasks are provided. There are 4 types of asynchronous tasks:
• gremlin: Gremlin tasks
• algorithm: OLAP algorithm tasks
• remove_schema: remove metadata
• rebuild_index: rebuild indexes
2. The list displays the asynchronous task information of the current graph, including task ID, task name, task type, creation time, elapsed time, status, and operations, enabling unified management of asynchronous tasks.
3. Filtering by task type and status is supported
4. Searching by task ID and task name is supported
5. Asynchronous tasks can be deleted individually or in batches
      image
      4.5.3 Gremlin asynchronous tasks
1. Create a task
• The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to Gremlin task, clicking Execute creates an asynchronous task in the asynchronous task center
2. Task submission
• After the task is submitted successfully, the graph area returns the submission result and task ID
3. Task details
• A [View] entry is provided for jumping to the task details to see the specific execution of the current task; after jumping to the task center, the row of the currently executing task is shown directly
      image

Click the View entry to jump to the task management list, as follows:

      image
4. View the results
• The results are displayed in JSON form
      4.5.4 OLAP algorithm tasks

Hubble does not provide visual OLAP algorithm execution. You can call the RESTful API to run OLAP algorithm tasks, find the corresponding task by its ID in task management, and view its progress and results.

      4.5.5 Delete metadata, rebuild index
1. Create a task
• In the metadata modeling module, deleting metadata can create an asynchronous metadata-deletion task
image
• When editing an existing vertex/edge type, adding an index can create an asynchronous index-creation task
image
2. Task details
• After confirming/saving, you can jump to the task center to view the details of the current task
      image

      4 - HugeGraph-Client Quick Start

      1 Overview Of Hugegraph

HugeGraph-Client sends HTTP requests to HugeGraph-Server to obtain and parse the execution results of the Server. Currently only a HugeGraph-Client for Java is provided. You can use HugeGraph-Client to write Java code that operates on HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing Gremlin statements.

      2 What You Need

• Java 11 (Java 8 is also supported)
      • Maven 3.5+

      3 How To Use

      The basic steps to use HugeGraph-Client are as follows:

• Build a new Maven project with IDEA or Eclipse
• Add the HugeGraph-Client dependency in the pom file
• Create an object and invoke the interfaces of HugeGraph-Client

See the complete example in the following section for details.

      4 Complete Example

      4.1 Build New Maven Project

      Using IDEA or Eclipse to create the project:

      4.2 Add Hugegraph-Client Dependency In POM

      <dependencies>
           <dependency>
               <groupId>org.apache.hugegraph</groupId>
               <artifactId>hugegraph-client</artifactId>
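The dependency above is truncated here; a complete declaration presumably looks like the following (1.0.0 is the latest release listed on the download page, so treat the version as an assumption and adjust it to match your server release):

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.hugegraph</groupId>
        <artifactId>hugegraph-client</artifactId>
        <!-- assumed: use the client version matching your server release -->
        <version>1.0.0</version>
    </dependency>
</dependencies>
```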

      HugeGraph-Hubble Quick Start

      1 HugeGraph-Hubble Overview

HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete toolchain ecosystem for export, backup, and recovery, effectively addressing the storage, query, and correlation-analysis needs of massive graph data. HugeGraph is widely used in risk control, insurance claims, recommendation and search, public security crime crackdown, knowledge graphs, network security, and IT operations at banks and securities companies, and is committed to letting more industries, organizations, and users enjoy the broader comprehensive value of data.

HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. It covers the whole process from data modeling, through efficient data import, to real-time and offline analysis of data and unified graph management, providing a wizard for the entire graph-application workflow. It is designed to make the platform smoother to use, lower the barrier to entry, and provide a more efficient and easy-to-use experience.

      The platform mainly includes the following modules:

      Graph Management

      The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.

      Metadata Modeling

      The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.

      Data Import

Data import converts the user’s business data into graph vertices and edges and inserts them into the graph database. The platform provides a wizard-style visual import module; by creating import tasks, it manages import tasks and runs multiple import tasks in parallel, improving import performance. After entering an import task, you only need to follow the platform’s step prompts, upload files, and fill in the content as needed to easily import graph data. Resumable uploads, an error-retry mechanism, and more are supported, reducing import cost and improving efficiency.

      Graph Analysis

By entering statements in the graph traversal language Gremlin, high-performance general-purpose analysis of graph data can be performed, including functions such as customized multidimensional path queries of vertices. Three display modes are provided for graph results: graph form, table form, and Json form; this multidimensional display of data meets the needs of users’ various scenarios. Running records and favorites for common statements are provided, making graph operations traceable and query input reusable and shareable, which is fast and efficient. Export of graph data is supported, in Json format.

      Task Management

      For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.

      2 Deploy

There are three ways to deploy hugegraph-hubble:

      • Use Docker (recommended)
      • Download the Toolchain binary package
      • Source code compilation

Special Note: If you start hubble with Docker and hubble and the server are on the same host, do not set the hostname of the graph to localhost/127.0.0.1 when configuring it on the Hubble web page; that would refer to the hubble container itself rather than the host machine, causing the connection to the server to fail. If hubble and the server are in the same docker network, you can use the container_name as the hostname, and 8080 as the port. Alternatively, you can use the host IP as the hostname, with the port being whatever the host maps to the server.

We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

      Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.

Use docker-compose up -d to start it. The docker-compose.yml is as follows:

      version: '3'

      HugeGraph-Hubble Quick Start

      1 HugeGraph-Hubble Overview

      HugeGraph is an analysis-oriented graph database system that supports batch operations, which fully supports Apache TinkerPop3 framework and Gremlin graph query language. It provides a complete tool chain ecology such as export, backup, and recovery, and effectively solve the storage, query and correlation analysis needs of massive graph data. HugeGraph is widely used in the fields of risk control, insurance claims, recommendation search, public security crime crackdown, knowledge graph, network security, IT operation and maintenance of bank securities companies, and is committed to allowing more industries, organizations and users to enjoy a wider range of data comprehensive value.

HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. It covers the whole workflow of a graph application, from data modeling, through efficient data import, to real-time and offline analysis and unified graph management, in a wizard-style process. It is designed to make usage smoother, lower the barrier to entry, and provide a more efficient and easy-to-use experience.

      The platform mainly includes the following modules:

      Graph Management

The graph management module provides unified management of multiple graphs: by creating a graph connection between the platform and the graph data, graphs can be accessed, edited, deleted, and queried.

      Metadata Modeling

The metadata modeling module supports building and managing graph models by creating property types, vertex types, edge types, and index types. The platform provides two display modes, list mode and graph mode, which show the metadata model in real time and make it more intuitive. It also provides cross-graph metadata reuse, which avoids the tedious, repetitive creation of the same metadata, greatly improving modeling efficiency and ease of use.

      Data Import

Data import converts the user’s business data into the vertices and edges of the graph and inserts them into the graph database. The platform provides a wizard-style visual import module: by creating import tasks, it manages the tasks and runs multiple import tasks in parallel to improve import performance. Inside an import task, you only need to follow the platform’s step-by-step prompts, upload files, and fill in the settings to import graph data easily. The module also supports resuming from breakpoints and retrying on error, which reduces import costs and improves efficiency.

      Graph Analysis

By entering statements in the graph traversal language Gremlin, you can perform high-performance general-purpose analysis of graph data, including customized multidimensional path queries on vertices. Three display modes are provided for graph results: graph form, table form, and JSON form, covering the needs of different usage scenarios. The module also records query history and lets you collect frequently used statements, making graph operations traceable and query input reusable and shareable, quickly and efficiently. Graph data can be exported, and the export format is JSON.

      Task Management

For time-consuming asynchronous tasks, such as Gremlin tasks that traverse the whole graph and index creation or rebuilding, the platform provides task management functions for unified management and result viewing.

      2 Deploy

There are three ways to deploy hugegraph-hubble:

      • Use Docker (recommended)
      • Download the Toolchain binary package
      • Source code compilation

Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, do not set the hostname for the graph on the Hubble web page to localhost/127.0.0.1. Inside the hubble container these refer to the container itself rather than the host machine, resulting in a connection failure to the server.

If hubble and the server are in the same Docker network, we recommend using the container_name (in our example, it is graph) as the hostname and 8080 as the port. Otherwise, use the host IP as the hostname, with the port that the host maps to the server.

We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.

Alternatively, you can use Docker Compose to start hubble. If hubble and the graph are in the same Docker network, you can access the graph using the graph’s container name, eliminating the need for the host machine’s IP address.

Use docker-compose up -d to start hubble. An example docker-compose.yml is as follows:

version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080

  hubble:
    image: hugegraph/hubble
    ports:
      - 8088:8088

To build hubble from source instead, package it with Maven and enter the generated directory:

mvn -e compile package -Dmaven.javadoc.skip=true -Dmaven.test.skip=true -ntp
cd apache-hugegraph-hubble-incubating*

      Run hubble

      bin/start-hubble.sh -d

      3 Platform Workflow

      The module usage process of the platform is as follows:

      image

      4 Platform Instructions

      4.1 Graph Management

      4.1.1 Graph creation

Under the graph management module, click [Create graph], and connect to multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password.

      image

      Create graph by filling in the content as follows:

      image

Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, do not set the hostname for the graph on the Hubble web page to localhost/127.0.0.1. If hubble and the server are in the same Docker network, we recommend using the container_name (in our example, it is graph) as the hostname and 8080 as the port. Otherwise, use the host IP as the hostname, with the port that the host maps to the server.

      4.1.2 Graph Access

Graph access provides entry into the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on the graph.

      image
      4.1.3 Graph management
      1. Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
      2. Search range: You can search for the graph name and ID.
      image

      4.2 Metadata Modeling (list + graph mode)

      4.2.1 Module entry

      Left navigation:

      image
      4.2.2 Property type
      4.2.2.1 Create type
      1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
      2. Created attributes can be used as attributes of vertex type and edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.2.2 Reuse
      1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
      2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.

      Select reuse items:

      image

      Check reuse items:

      image
      4.2.2.3 Management
      1. You can delete a single item or delete it in batches in the attribute list.
      4.2.3 Vertex type
      4.2.3.1 Create type
1. Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the vertex type.

      List mode:

      image

      Graph mode:

      image
      4.2.3.2 Reuse
1. Reusing a vertex type also reuses the attributes and attribute indexes associated with that type.
2. The reuse method is similar to property reuse, see 4.2.2.2.
      4.2.3.3 Administration
      1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.

      2. You can delete a single item or delete it in batches.

      image
      4.2.4 Edge Types
      4.2.4.1 Create
1. Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the edge type.

      List mode:

      image

      Graph mode:

      image
      4.2.4.2 Reuse
1. Reusing an edge type also reuses the start point type, end point type, associated attributes, and attribute indexes of that type.
2. The reuse method is similar to property reuse, see 4.2.2.2.
      4.2.4.3 Administration
      1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
      2. You can delete a single item or delete it in batches.
      4.2.5 Index Types

      Displays vertex and edge indices for vertex types and edge types.

      4.3 Data Import

Note: currently we recommend using hugegraph-loader for formal data imports; the built-in import in hubble is intended for testing and getting started.

      The usage process of data import is as follows:

      image
      4.3.1 Module entrance

      Left navigation:

      image
      4.3.2 Create task
      1. Fill in the task name and remarks (optional) to create an import task.
      2. Multiple import tasks can be created and imported in parallel.
      image
      4.3.3 Uploading files
1. Upload the files to be imported. The currently supported format is CSV; more formats will be supported over time.
      2. Multiple files can be uploaded at the same time.
      image
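As a concrete (hypothetical) illustration of what such an upload file can look like, the sketch below generates a small vertex CSV with a header row. The column names ("name", "age", "city") are illustrative assumptions, not a fixed schema:

```python
import csv
import io

# Sketch: build a small vertex CSV like the ones hubble's import wizard accepts.
# Column names ("name", "age", "city") are illustrative assumptions.
def make_vertex_csv(rows, header=("name", "age", "city")):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)   # header row, matched against attribute names later
    writer.writerows(rows)    # one vertex per data row
    return buf.getvalue()

content = make_vertex_csv([("marko", 29, "Beijing"), ("vadas", 27, "Hongkong")])
```

A file generated this way can then be uploaded in this step and mapped to a vertex type in the next step.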
      4.3.4 Setting up data mapping
1. Set up data mapping for the uploaded files, including file settings and type settings.

2. File settings: check or fill in the file’s own settings, such as whether it contains a header, the separator, and the encoding format. All of these have default values and do not need to be filled in manually.

      3. Type setting:

  1. Vertex map and edge map:

    【Vertex Type】: select the vertex type and map columns of the uploaded file to its ID;

    【Edge Type】: select the edge type and map columns of the uploaded file to the ID columns of its start point type and end point type;

  2. Mapping settings: map columns of the uploaded file to the attributes of the selected vertex type. If an attribute name is the same as the file’s header name, the mapping attribute is matched automatically and does not need to be selected manually.

  3. After the settings are complete, the settings list is displayed before proceeding to the next step. Adding, editing, and deleting mappings are supported.

      Fill in the settings map:

      image

      Mapping list:

      image
      4.3.5 Import data

Before importing, you need to fill in the import setting parameters. After filling them in, you can start importing data into the graph database.

      1. Import settings
• The import setting parameters are shown in the figure below; all have default values and do not need to be filled in manually
      image
2. Import details
      • Click Start Import to start the file import task
• The import details show, for each uploaded file, the mapping type, import speed, import progress, elapsed time, and the specific status of the current task, and each task can be paused, resumed, stopped, and so on
      • If the import fails, you can view the specific reason
      image

      4.4 Data Analysis

      4.4.1 Module entry

      Left navigation:

      image
4.4.2 Multi-graph switching

Use the entry on the left to switch flexibly between the operation spaces of multiple graphs.

      image
      4.4.3 Graph Analysis and Processing

HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
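For illustration, the same kind of statement can also be submitted over HTTP rather than through the web UI. The /apis/gremlin endpoint path and the payload key below follow HugeGraph-Server's REST API conventions but are assumptions here; verify them against your server version:

```python
import json

# Sketch: prepare a Gremlin statement for HugeGraph-Server's REST endpoint.
# The "/apis/gremlin" path and {"gremlin": ...} payload are assumed conventions;
# verify them against your server's API documentation before use.
def gremlin_request(host, port, statement):
    url = "http://{}:{}/apis/gremlin".format(host, port)
    body = json.dumps({"gremlin": statement})  # POST this body as JSON
    return url, body

url, body = gremlin_request("localhost", 8080, "g.V().limit(10)")
```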

      After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].

      Support zoom, center, full screen, export and other operations.

【Graph mode】

      image

      【Table mode】

      image

      【Json mode】

      image
      4.4.4 Data Details

Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the information displayed for the graph and improving usability.

      4.4.5 Multidimensional Path Query of Graph Results

In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in a query result, realizing customized mining of graph results.

Right-click a vertex to open its menu, which provides expand, query, and hide operations.

      • Expand: Click to display the vertices associated with the selected point.
      • Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
      • Hide: When clicked, hides the selected point and its associated edges.

      Double-clicking a vertex also displays the vertex associated with the selected point.

      image
      4.4.6 Add vertex/edge
      4.4.6.1 Added vertex

      In the graph area, two entries can be used to dynamically add vertices, as follows:

      1. Click on the graph area panel, the Add Vertex entry appears
      2. Click the first icon in the action bar in the upper right corner

      Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.

      The entry is as follows:

      image

      Add the vertex content as follows:

      image
      4.4.6.2 Add edge

      Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.

4.4.7 Query records and favorites
1. Each query is recorded at the bottom of the graph area, including the query time, execution type, content, status, and elapsed time, together with [collection] and [load] operations, giving a complete, traceable record of graph executions and allowing execution content to be loaded and reused quickly.
2. A statement-collection function is provided for collecting frequently used statements, making high-frequency statements easy to call quickly.
      image

      4.5 Task Management

      4.5.1 Module entry

      Left navigation:

      image
      4.5.2 Task Management
1. Provides unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
• gremlin: Gremlin tasks
• algorithm: OLAP algorithm tasks
• remove_schema: remove metadata
• rebuild_index: rebuild indexes
2. The list displays the asynchronous task information of the current graph, including the task ID, task name, task type, creation time, elapsed time, status, and operations, realizing the management of asynchronous tasks.
3. Filtering by task type and status is supported.
4. Searching by task ID and task name is supported.
5. Asynchronous tasks can be deleted individually or in batches.
      image
      4.5.3 Gremlin asynchronous tasks
1. Create a task
• The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to Gremlin task, clicking execute creates an asynchronous task in the asynchronous task center.
2. Task submission
• After the task is submitted successfully, the graph area returns the submission result and the task ID.
3. Task details
• A [View] entry is provided, which jumps to the task details to show the specific execution of the current task. After jumping to the task center, the row of the currently executing task is displayed directly.
      image

      Click to view the entry to jump to the task management list, as follows:

      image
4. View the results
• The results are displayed in JSON form
      4.5.4 OLAP algorithm tasks

      There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
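Since OLAP tasks are located by ID, a small helper that builds the task-status URL can illustrate the call. The graphs/{graph}/tasks/{id} path below follows HugeGraph's REST conventions but is an assumption; check your server's API documentation:

```python
# Sketch: build the REST URL for checking an asynchronous task by ID.
# The "/apis/graphs/{graph}/tasks/{id}" path is an assumed convention,
# not confirmed by this document; verify it for your server version.
def task_url(host, port, graph, task_id):
    return "http://{}:{}/apis/graphs/{}/tasks/{}".format(host, port, graph, task_id)

# e.g. the status URL for task 3 on a graph named "hugegraph"
url = task_url("localhost", 8080, "hugegraph", 3)
```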

      4.5.5 Delete metadata, rebuild index
1. Create a task
• In the metadata modeling module, deleting metadata can create an asynchronous metadata-deletion task
image
• When editing an existing vertex/edge type, adding an index can create an asynchronous index-creation task
image
2. Task details
• After confirming/saving, you can jump to the task center to view the details of the current task
      image

    diff --git a/docs/quickstart/hugegraph-server/index.html b/docs/quickstart/hugegraph-server/index.html index f2fdcf60b..a21a82b45 100644 --- a/docs/quickstart/hugegraph-server/index.html +++ b/docs/quickstart/hugegraph-server/index.html @@ -2,9 +2,9 @@ HugeGraph-Server is the core part of the HugeGraph Project, contains submodules such as Core, Backend, API. The Core …">


    HugeGraph-Server Quick Start

    1 HugeGraph-Server Overview

HugeGraph-Server is the core component of the HugeGraph project; it contains submodules such as Core, Backend, and API.

The Core module is an implementation of the TinkerPop interface; the Backend module saves the graph data to the data store (currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB); the API module provides the HTTP server, which converts a client's HTTP request into a call to the Core module.

Both spellings HugeGraph-Server and HugeGraphServer appear in this document (and similarly for other modules). There is no significant difference in meaning; they can be distinguished as follows: HugeGraph-Server refers to the code of the server-related components, while HugeGraphServer refers to the running service process.

    2 Dependency for Building/Running

    2.1 Install Java 11 (JDK 11)

It is recommended to use Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now); install and configure it yourself.

Be sure to execute the java -version command to check the JDK version before proceeding.
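The version check can also be scripted, e.g. in a startup wrapper. Below is a minimal sketch that extracts the major version from the banner line of `java -version`; the sample version strings are assumptions based on common JDK builds:

```shell
# Sketch: parse the major Java version out of a `java -version` banner line.
get_java_major() {
  line=$1
  # take the first number inside the quotes, e.g. 11 from "11.0.20"
  major=$(printf '%s' "$line" | sed -E 's/.*"([0-9]+)(\.[0-9]+)*.*/\1/')
  # Java 8 and earlier report "1.x.y", so map a leading 1 to the next digit
  if [ "$major" = "1" ]; then
    major=$(printf '%s' "$line" | sed -E 's/.*"1\.([0-9]+).*/\1/')
  fi
  echo "$major"
}

get_java_major 'openjdk version "11.0.20" 2023-07-18'   # prints 11
get_java_major 'java version "1.8.0_371"'               # prints 8
```

In practice the input would be `java -version 2>&1 | head -n 1`.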

    3 Deploy

    There are four ways to deploy HugeGraph-Server components:

    • Method 1: Use Docker container (recommended)
    • Method 2: Download the binary tarball
    • Method 3: Source code compilation
    • Method 4: One-click deployment

3.1 Use Docker container

You can refer to the Docker deployment guide.

We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start a HugeGraph server with a RocksDB backend in the background.

    Optional:

    1. use docker exec -it graph bash to enter the container to do some operations.
2. use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result; the detailed steps are described in 5.1.1.

Also, if we want to manage other HugeGraph-related instances in one file, we can use docker-compose to deploy them with the command docker-compose up -d (you can configure only the server). Here is an example docker-compose.yml:

version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    # environment:
    #   - PRELOAD=true
    # PRELOAD is an option to preload a built-in sample graph when initializing
    ports:
      - 8080:8080
     

3.2 Download the binary tarball

You can download the binary tarball from the download page of the ASF site like this:

    # use the latest version, here is 1.0.0 for example
     wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
     tar zxf *hugegraph*.tar.gz
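The Download page above also publishes SHA512 files for each tarball; before extracting, the archive can be verified with `sha512sum -c`. A sketch, demonstrated on a throwaway file since the exact `.sha512` filename next to the tarball is an assumption:

```shell
# Sketch: verify an archive against a "HASH  FILENAME" checksum file,
# the same check you would run on the release tarball and its .sha512 file.
tmp=$(mktemp -d) && cd "$tmp"
printf 'demo content' > demo.tar.gz
sha512sum demo.tar.gz > demo.tar.gz.sha512

# exits non-zero on mismatch, so a `set -e` install script stops here
sha512sum -c demo.tar.gz.sha512    # prints "demo.tar.gz: OK"
```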
     cd *hugegraph*/*tool* 
     

Note: ${version} is the version number; refer to the Download Page for the latest version, or click the link to download directly from the Download page.

The general entry script for HugeGraph-Tools is bin/hugegraph. Users can use the help command to view its usage; here only the commands for one-click deployment are introduced.

    bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]
     

{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed (users can view the conf/version-mapping.yaml file for version information); {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio; {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs (the default download URL is used if not provided). For example, to deploy HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.

    4 Config

If you need to quickly start HugeGraph just for testing, then you only need to modify a few configuration items (see the next section). For a detailed configuration introduction, please refer to the configuration document and the introduction to configuration items.

      5 Startup

      5.1 Use Docker to startup

In 3.1 Use Docker container, we introduced how to use Docker to deploy hugegraph-server. The server can also preload an example graph by setting a parameter.

      5.1.1 Create example graph when starting server

      Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.

      1. Use docker run

        Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest

      2. Use docker-compose

Create docker-compose.yml as follows. We should set the environment variable PRELOAD=true. example.groovy is a predefined script that preloads the sample data; if needed, we can mount a new example.groovy to change the preloaded data.

version: '3'
services:
  graph:
    image: hugegraph/hugegraph:latest
    environment:
      - PRELOAD=true
    ports:
      - 8080:8080
         

Use docker-compose up -d to start the container.
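Right after `docker-compose up -d` returns, the server may still be starting up, so the first verification request can fail. A small polling sketch (the URL mirrors the verification request used in this guide; the retry budget is arbitrary):

```shell
# Sketch: poll a URL until it answers, or give up after N tries.
wait_for_url() {
  url=$1
  tries=${2:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -f makes curl fail on HTTP errors, -s silences progress output
    if curl -sf "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# e.g. wait_for_url "http://localhost:8080/graphs/hugegraph/graph/vertices"
```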

Then use the RESTful API to request HugeGraphServer and get the following result:

      > curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
       
       {"vertices":[{"id":"2:lop","label":"software","type":"vertex","properties":{"name":"lop","lang":"java","price":328}},{"id":"1:josh","label":"person","type":"vertex","properties":{"name":"josh","age":32,"city":"Beijing"}},{"id":"1:marko","label":"person","type":"vertex","properties":{"name":"marko","age":29,"city":"Beijing"}},{"id":"1:peter","label":"person","type":"vertex","properties":{"name":"peter","age":35,"city":"Shanghai"}},{"id":"1:vadas","label":"person","type":"vertex","properties":{"name":"vadas","age":27,"city":"Hongkong"}},{"id":"2:ripple","label":"software","type":"vertex","properties":{"name":"ripple","lang":"java","price":199}}]}
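The response above can also be checked programmatically. A minimal sketch that counts the returned vertices by counting their id fields; it is applied here to a truncated copy of the sample (against a live server, pipe `curl -s ... | gunzip` in instead):

```shell
# Sketch: count vertices in the JSON response without extra tooling.
resp='{"vertices":[{"id":"2:lop","label":"software"},{"id":"1:josh","label":"person"}]}'
count=$(printf '%s' "$resp" | grep -o '"id":"[^"]*"' | wc -l)
echo "$count"    # prints 2 for this truncated sample
```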
       serializer=binary
       rocksdb.data_path=.
       rocksdb.wal_path=.

Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3

Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)

      cd *hugegraph-${version}
       bin/init-store.sh
       Initing HugeGraph Store...
       2017-12-01 11:26:51 1424  [main] [INFO ] org.apache.hugegraph.HugeGraph [] - Opening backend store: 'cassandra'
       
       #cassandra.keyspace.strategy=SimpleStrategy
       #cassandra.keyspace.replication=3

Since ScyllaDB is an "optimized version" of Cassandra, users who do not have ScyllaDB installed can also use Cassandra directly as the backend storage. They only need to change the backend and serializer to scylladb and point the host and port to the seeds and port of the Cassandra cluster. However, this is not recommended, because it does not take advantage of ScyllaDB itself.

Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)

      cd *hugegraph-${version}
       bin/init-store.sh
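As the note above says, pointing an existing Cassandra configuration at ScyllaDB only means flipping two options before re-running init-store. A shell sketch (the properties file path shown in the example call is an assumption; adjust it to your graph's config):

```shell
# Sketch: rewrite backend/serializer to scylladb in place, keeping a backup.
switch_to_scylladb() {
  conf=$1
  # keeps the original as <file>.bak, then rewrites the two relevant options
  sed -i.bak \
    -e 's/^backend=.*/backend=scylladb/' \
    -e 's/^serializer=.*/serializer=scylladb/' \
    "$conf"
}

# e.g. switch_to_scylladb conf/graphs/hugegraph.properties
```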
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       #hbase.enable_partition=true
       #hbase.vertex_partitions=10
       #hbase.edge_partitions=30

Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
       jdbc.reconnect_max_times=3
       jdbc.reconnect_interval=3
       jdbc.ssl_mode=false

Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)

      cd *hugegraph-${version}
       bin/init-store.sh
       

      Start server

      bin/start-hugegraph.sh
       Starting HugeGraphServer...
               ...
           ]
       }

For detailed API, please refer to RESTful-API.

You can also visit localhost:8080/swagger-ui/index.html to check the API.

      7 Stop Server

      $cd *hugegraph-${version}
       $bin/stop-hugegraph.sh

      8 Debug Server with IntelliJ IDEA

      Please refer to Setup Server in IDEA



    diff --git a/docs/quickstart/index.xml b/docs/quickstart/index.xml index eb7b40148..53978da73 100644 --- a/docs/quickstart/index.xml +++ b/docs/quickstart/index.xml @@ -24,17 +24,18 @@ <p>Optional:</p> <ol> <li>use <code>docker exec -it graph bash</code> to enter the container to do some operations.</li> -<li>use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD=&quot;true&quot; hugegraph/hugegraph</code> to start with a <strong>built-in</strong> example graph.</li> +<li>use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD=&quot;true&quot; hugegraph/hugegraph</code> to start with a <strong>built-in</strong> example graph. We can use <code>RESTful API</code> to verify the result. The detailed step can refer to <a href="http://127.0.0.1:1313/docs/quickstart/hugegraph-server/#511-create-example-graph-when-starting-server">5.1.1</a></li> </ol> -<p>Also, we can use <code>docker-compose</code> to deploy, with <code>docker-compose up -d</code>. Here is an example <code>docker-compose.yml</code>:</p> +<p>Also, if we want to manage the other Hugegraph related instances in one file, we can use <code>docker-compose</code> to deploy, with the command <code>docker-compose up -d</code> (you can config only <code>server</code>). 
Here is an example <code>docker-compose.yml</code>:</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#204a87;font-weight:bold">version</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#4e9a06">&#39;3&#39;</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"></span><span style="color:#204a87;font-weight:bold">services</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">graph</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic">#environment:</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># environment:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span 
style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># - PRELOAD=true</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># PRELOAD is a option to preload a build-in sample graph when initializing.</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span></code></pre></div><h4 id="32-download-the-binary-tar-tarball">3.2 Download the binary tar tarball</h4> <p>You could download the binary tarball from the download page of ASF site like this:</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#8f5902;font-style:italic"># use the latest version, here is 1.0.0 for example</span> @@ -113,11 
+114,11 @@ for detailed configuration introduction, please refer to <a href="/docs/confi <ol> <li> <p>Use <code>docker run</code></p> -<p>Use <code>docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest</code></p> +<p>Use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest</code></p> </li> <li> <p>Use <code>docker-compose</code></p> -<p>Create <code>docker-compose.yml</code> as following</p> +<p>Create <code>docker-compose.yml</code> as following. We should set the environment variable <code>PRELOAD=true</code>. <a href="https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy"><code>example.groovy</code></a> is a predefined script to preload the sample data. If needed, we can mount a new <code>example.groovy</code> to change the preload data.</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#204a87;font-weight:bold">version</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#4e9a06">&#39;3&#39;</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">services</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">graph</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> @@ -126,7 +127,7 @@ for detailed configuration introduction, please refer to <a 
href="/docs/confi </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">environment</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#000">PRELOAD=true</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span></code></pre></div><p>Use <code>docker-compose up -d</code> to start the container</p> </li> </ol> @@ -168,7 +169,7 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span>serializer=binary </span></span><span style="display:flex;"><span>rocksdb.data_path=. </span></span><span style="display:flex;"><span>rocksdb.wal_path=. 
-</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p> +</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under &lsquo;conf/graphs/&rsquo;)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>Start server</p> @@ -196,7 +197,7 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span> </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 -</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p> +</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under &lsquo;conf/graphs/&rsquo;)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span><span style="display:flex;"><span>Initing HugeGraph Store... 
@@ -242,7 +243,7 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy </span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3 </span></span></code></pre></div><p>Since the scylladb database itself is an &ldquo;optimized version&rdquo; based on cassandra, if the user does not have scylladb installed, they can also use cassandra as the backend storage directly. They only need to change the backend and serializer to scylladb, and the host and post point to the seeds and port of the cassandra cluster. Yes, but it is not recommended to do so, it will not take advantage of scylladb itself.</p> -<p>Initialize the database (required only on first startup)</p> +<p>Initialize the database (required on first startup or a new configuration was manually added under &lsquo;conf/graphs/&rsquo;)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>Start server</p> @@ -268,7 +269,7 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span>#hbase.enable_partition=true </span></span><span style="display:flex;"><span>#hbase.vertex_partitions=10 </span></span><span style="display:flex;"><span>#hbase.edge_partitions=30 -</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p> +</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under 
&lsquo;conf/graphs/&rsquo;)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>Start server</p> @@ -300,7 +301,7 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span>jdbc.reconnect_max_times=3 </span></span><span style="display:flex;"><span>jdbc.reconnect_interval=3 </span></span><span style="display:flex;"><span>jdbc.ssl_mode=false -</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p> +</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under &lsquo;conf/graphs/&rsquo;)</p> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span>bin/init-store.sh </span></span></code></pre></div><p>Start server</p> @@ -400,7 +401,12 @@ after the service is stopped artificially, or when the service needs to be start </span></span><span style="display:flex;"><span> ... 
</span></span><span style="display:flex;"><span> ] </span></span><span style="display:flex;"><span>} -</span></span></code></pre></div><p>For detailed API, please refer to <a href="/docs/clients/restful-api">RESTful-API</a></p> +</span></span></code></pre></div><p id="swaggerui-example"></p> +<p>For detailed API, please refer to <a href="/docs/clients/restful-api">RESTful-API</a></p> +<p>You can also visit <code>localhost:8080/swagger-ui/index.html</code> to check the API.</p> +<div style="text-align: center;"> +<img src="/docs/images/images-server/621swaggerui示例.png" alt="image"> +</div> <h3 id="7-stop-server">7 Stop Server</h3> <div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#000">$cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span> </span></span><span style="display:flex;"><span><span style="color:#000">$bin</span>/stop-hugegraph.sh @@ -1462,7 +1468,8 @@ And there is no need to guarantee the order between the two parameters.</p> </ul> <h4 id="21-use-docker-recommended">2.1 Use docker (recommended)</h4> <blockquote> -<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. This will refer to the <code>hubble</code> container internally rather than the host machine, resulting in a connection failure to the server. If <code>hubble</code> and <code>server</code> is in the same docker network, you can use the <code>container_name</code> as the hostname, and <code>8080</code> as the port. 
Or you can use the ip of the host as the hostname, and the port is configured by the host for the server.</p> +<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. This will refer to the <code>hubble</code> container internally rather than the host machine, resulting in a connection failure to the server.</p> +<p>If <code>hubble</code> and <code>server</code> is in the same docker network, we <strong>recommend</strong> using the <code>container_name</code> (in our example, it is <code>graph</code>) as the hostname, and <code>8080</code> as the port. Or you can use the <strong>host IP</strong> as the hostname, and the port is configured by the host for the server.</p> </blockquote> <p>We can use <code>docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble</code> to quick start <a href="https://hub.docker.com/r/hugegraph/hubble">hubble</a>.</p> <p>Alternatively, you can use Docker Compose to start <code>hubble</code>. 
Additionally, if <code>hubble</code> and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine&rsquo;s IP address.</p> @@ -1473,7 +1480,7 @@ And there is no need to guarantee the order between the two parameters.</p> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">container_name</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">graph</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> -</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline"> +</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span 
style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">hubble</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hubble</span><span style="color:#f8f8f8;text-decoration:underline"> @@ -1528,10 +1535,13 @@ And there is no need to guarantee the order between the two parameters.</p> <div style="text-align: center;"> <img src="/docs/images/images-hubble/311图创建.png" alt="image"> </div> -<p>Create graph by filling in the content as follows::</p> +<p>Create graph by filling in the content as follows:</p> <center> <img src="/docs/images/images-hubble/311图创建2.png" alt="image"> </center> +<blockquote> +<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. If <code>hubble</code> and <code>server</code> is in the same docker network, we <strong>recommend</strong> using the <code>container_name</code> (in our example, it is <code>graph</code>) as the hostname, and <code>8080</code> as the port. 
Otherwise, you can use the <strong>host IP</strong> as the hostname, with the port that the host maps to the server.</p> +</blockquote> <h5 id="412graph-access">4.1.2 Graph Access</h5> <p>Provides access to the information of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.</p> <center> @@ -1621,7 +1631,7 @@ And there is no need to guarantee the order between the two parameters.</p> <center> <img src="/docs/images/images-hubble/3241边创建.png" alt="image"> </center> -<p>Graph mode:</p> +<p>Graph mode:</p> <center> <img src="/docs/images/images-hubble/3241边创建2.png" alt="image"> </center> @@ -1638,6 +1648,9 @@ And there is no need to guarantee the order between the two parameters.</p> <h5 id="425-index-types">4.2.5 Index Types</h5> <p>Displays vertex and edge indices for vertex types and edge types.</p> <h4 id="43-data-import">4.3 Data Import</h4> +<blockquote> +<p><strong>Note</strong>: currently, we recommend using <a href="/en/docs/quickstart/hugegraph-loader">hugegraph-loader</a> for formal data import.
The built-in import of <code>hubble</code> is used for <strong>testing</strong> and <strong>getting started</strong>.</p> +</blockquote> <p>The usage process of data import is as follows:</p> <center> <img src="/docs/images/images-hubble/33导入流程.png" alt="image"> diff --git a/en/sitemap.xml b/en/sitemap.xml index 222c01562..2d7a3448f 100644 --- a/en/sitemap.xml +++ b/en/sitemap.xml @@ -1 +1 @@ -/docs/guides/architectural/2023-06-25T21:06:07+08:00/docs/config/config-guide/2023-09-19T14:14:13+08:00/docs/language/hugegraph-gremlin/2023-05-14T07:29:41-05:00/docs/contribution-guidelines/contribute/2023-09-09T20:50:32+08:00/docs/performance/hugegraph-benchmark-0.5.6/2023-05-14T22:31:02-05:00/docs/quickstart/hugegraph-server/2023-10-09T21:10:07+08:00/docs/introduction/readme/2023-06-18T14:57:33+08:00/docs/changelog/hugegraph-1.0.0-release-notes/2023-01-09T07:41:46+08:00/docs/clients/restful-api/2023-07-31T23:55:30+08:00/docs/clients/restful-api/schema/2023-05-14T19:35:13+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2023-05-15T22:47:44-05:00/docs/config/config-option/2023-09-19T14:14:13+08:00/docs/guides/desgin-concept/2023-05-14T07:20:21-05:00/docs/download/download/2023-06-17T14:43:04+08:00/docs/language/hugegraph-example/2023-02-02T01:21:10+08:00/docs/clients/hugegraph-client/2023-01-01T16:16:43+08:00/docs/performance/api-preformance/2023-06-17T14:43:04+08:00/docs/quickstart/hugegraph-loader/2023-10-07T16:52:41+08:00/docs/clients/restful-api/propertykey/2023-05-19T05:15:56-05:00/docs/changelog/hugegraph-0.12.0-release-notes/2023-05-18T06:11:19-05:00/docs/contribution-guidelines/subscribe/2023-06-17T14:43:04+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2023-05-16T23:30:00-05:00/docs/config/config-authentication/2023-09-19T14:14:13+08:00/docs/clients/gremlin-console/2023-06-12T23:52:07+08:00/docs/guides/custom-plugin/2023-09-19T14:14:13+08:00/docs/performance/hugegraph-loader-performance/2023-05-18T00:34:48-05:00/docs/quickstart
/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/validate-release/2023-02-15T16:14:21+08:00/docs/clients/restful-api/vertexlabel/2023-05-19T04:03:23-05:00/docs/quickstart/hugegraph-hubble/2023-10-09T21:10:07+08:00/docs/guides/backup-restore/2023-05-14T07:26:12-05:00/docs/config/2022-04-17T11:36:55+08:00/docs/config/config-https/2023-05-19T05:04:16-05:00/docs/quickstart/hugegraph-client/2023-10-09T17:41:59+08:00/docs/clients/restful-api/edgelabel/2023-05-19T05:17:26-05:00/docs/contribution-guidelines/hugegraph-server-idea-setup/2023-06-25T21:06:07+08:00/docs/clients/2022-04-17T11:36:55+08:00/docs/config/config-computer/2023-01-01T16:16:43+08:00/docs/guides/faq/2023-05-14T07:28:41-05:00/docs/clients/restful-api/indexlabel/2023-05-19T05:18:17-05:00/docs/quickstart/hugegraph-tools/2023-10-09T17:41:59+08:00/docs/quickstart/hugegraph-computer/2023-10-09T17:41:59+08:00/docs/guides/2022-04-17T11:36:55+08:00/docs/clients/restful-api/rebuild/2022-05-09T18:43:53+08:00/docs/language/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertex/2023-06-04T23:04:47+08:00/docs/clients/restful-api/edge/2023-06-29T10:17:29+08:00/docs/performance/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/2022-12-30T19:36:31+08:00/docs/clients/restful-api/traverser/2023-09-15T11:15:58+08:00/docs/changelog/2022-04-28T21:26:41+08:00/docs/clients/restful-api/rank/2022-09-15T12:59:59+08:00/docs/clients/restful-api/variable/2023-05-21T04:38:57-05:00/docs/clients/restful-api/graphs/2023-09-18T17:50:28+08:00/docs/clients/restful-api/task/2023-09-19T14:14:13+08:00/docs/clients/restful-api/gremlin/2023-05-21T04:39:11-05:00/docs/clients/restful-api/cypher/2023-07-31T23:55:30+08:00/docs/clients/restful-api/auth/2023-07-31T23:55:30+08:00/docs/clients/restful-api/other/2023-07-31T23:55:30+08:00/docs/2022-12-30T19:57:48+08:00/blog/news/2022-03-21T18:55:33+08:00/blog/releases/2022-03-21T18:55:33+08:00/blog/2018/10/06/easy-documentation-with-docsy/2022-03-21T18:55:33+08:00/blog/2018/10/06/the-second-b
log-post/2022-03-21T18:55:33+08:00/blog/2018/01/04/another-great-release/2022-03-21T18:55:33+08:00/docs/cla/2022-03-21T19:51:14+08:00/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T12:59:59+08:00/docs/summary/2023-10-09T17:41:59+08:00/blog/2022-03-21T18:55:33+08:00/categories//community/2022-03-21T18:55:33+08:00/2023-01-15T13:44:01+00:00/search/2022-03-21T18:55:33+08:00/tags/ \ No newline at end of file +/docs/guides/architectural/2023-06-25T21:06:07+08:00/docs/config/config-guide/2023-11-01T21:52:52+08:00/docs/language/hugegraph-gremlin/2023-05-14T07:29:41-05:00/docs/contribution-guidelines/contribute/2023-09-09T20:50:32+08:00/docs/performance/hugegraph-benchmark-0.5.6/2023-05-14T22:31:02-05:00/docs/quickstart/hugegraph-server/2023-11-01T21:52:52+08:00/docs/introduction/readme/2023-06-18T14:57:33+08:00/docs/changelog/hugegraph-1.0.0-release-notes/2023-01-09T07:41:46+08:00/docs/clients/restful-api/2023-11-01T21:52:52+08:00/docs/clients/restful-api/schema/2023-05-14T19:35:13+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2023-05-15T22:47:44-05:00/docs/config/config-option/2023-09-19T14:14:13+08:00/docs/guides/desgin-concept/2023-05-14T07:20:21-05:00/docs/download/download/2023-06-17T14:43:04+08:00/docs/language/hugegraph-example/2023-02-02T01:21:10+08:00/docs/clients/hugegraph-client/2023-01-01T16:16:43+08:00/docs/performance/api-preformance/2023-06-17T14:43:04+08:00/docs/quickstart/hugegraph-loader/2023-10-07T16:52:41+08:00/docs/clients/restful-api/propertykey/2023-05-19T05:15:56-05:00/docs/changelog/hugegraph-0.12.0-release-notes/2023-05-18T06:11:19-05:00/docs/contribution-guidelines/subscribe/2023-06-17T14:43:04+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2023-05-16T23:30:00-05:00/docs/config/config-authentication/2023-09-19T14:14:13+08:00/docs/clients/gremlin-console/2023-06-12T23:52:07+08:00/docs/guides/custom-plugin/2023-09-19T14:14:13+08:00/docs/performance/hugegraph-loader-performance/2023-05-18T00:34:48-05:
00/docs/quickstart/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/validate-release/2023-02-15T16:14:21+08:00/docs/clients/restful-api/vertexlabel/2023-05-19T04:03:23-05:00/docs/quickstart/hugegraph-hubble/2023-11-01T21:52:52+08:00/docs/guides/backup-restore/2023-05-14T07:26:12-05:00/docs/config/2022-04-17T11:36:55+08:00/docs/config/config-https/2023-05-19T05:04:16-05:00/docs/quickstart/hugegraph-client/2023-10-09T17:41:59+08:00/docs/clients/restful-api/edgelabel/2023-05-19T05:17:26-05:00/docs/contribution-guidelines/hugegraph-server-idea-setup/2023-06-25T21:06:07+08:00/docs/clients/2022-04-17T11:36:55+08:00/docs/config/config-computer/2023-01-01T16:16:43+08:00/docs/guides/faq/2023-05-14T07:28:41-05:00/docs/clients/restful-api/indexlabel/2023-05-19T05:18:17-05:00/docs/quickstart/hugegraph-tools/2023-10-09T17:41:59+08:00/docs/quickstart/hugegraph-computer/2023-10-09T17:41:59+08:00/docs/guides/2022-04-17T11:36:55+08:00/docs/clients/restful-api/rebuild/2022-05-09T18:43:53+08:00/docs/language/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertex/2023-06-04T23:04:47+08:00/docs/clients/restful-api/edge/2023-06-29T10:17:29+08:00/docs/performance/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/2022-12-30T19:36:31+08:00/docs/clients/restful-api/traverser/2023-09-15T11:15:58+08:00/docs/changelog/2022-04-28T21:26:41+08:00/docs/clients/restful-api/rank/2022-09-15T12:59:59+08:00/docs/clients/restful-api/variable/2023-05-21T04:38:57-05:00/docs/clients/restful-api/graphs/2023-09-18T17:50:28+08:00/docs/clients/restful-api/task/2023-09-19T14:14:13+08:00/docs/clients/restful-api/gremlin/2023-05-21T04:39:11-05:00/docs/clients/restful-api/cypher/2023-07-31T23:55:30+08:00/docs/clients/restful-api/auth/2023-07-31T23:55:30+08:00/docs/clients/restful-api/other/2023-07-31T23:55:30+08:00/docs/2022-12-30T19:57:48+08:00/blog/news/2022-03-21T18:55:33+08:00/blog/releases/2022-03-21T18:55:33+08:00/blog/2018/10/06/easy-documentation-with-docsy/2022-03-21T18:55:33+08:00/blog/2018/
10/06/the-second-blog-post/2022-03-21T18:55:33+08:00/blog/2018/01/04/another-great-release/2022-03-21T18:55:33+08:00/docs/cla/2022-03-21T19:51:14+08:00/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T12:59:59+08:00/docs/summary/2023-10-09T17:41:59+08:00/blog/2022-03-21T18:55:33+08:00/categories//community/2022-03-21T18:55:33+08:00/2023-01-15T13:44:01+00:00/search/2022-03-21T18:55:33+08:00/tags/ \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index fe106a319..2f5904c4e 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1 +1 @@ -/en/sitemap.xml2023-10-09T21:10:07+08:00/cn/sitemap.xml2023-10-09T21:10:07+08:00 \ No newline at end of file +/en/sitemap.xml2023-11-01T21:52:52+08:00/cn/sitemap.xml2023-11-01T21:52:52+08:00 \ No newline at end of file
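The Docker networking advice in the hubble hunks above (server and hubble in one compose project, hubble reaching the server by its container name on port 8080) can be sketched as a minimal docker-compose file. The image names, the `graph` container name, and the `8080:8080` mapping follow the diff above; the hubble port `8088` and the service layout are illustrative assumptions, not taken from this commit:

```yaml
# Minimal sketch: server and hubble on the default compose network,
# so hubble can address the server by container name instead of host IP.
version: "3"
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph      # used as the hostname in the Hubble UI
    ports:
      - "8080:8080"
  hubble:
    image: hugegraph/hubble
    container_name: hubble
    ports:
      - "8088:8088"            # assumed hubble web port
    depends_on:
      - graph
# In the Hubble "create graph" page, set hostname to "graph" and port to 8080.
```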