diff --git a/cn/docs/_print/index.html b/cn/docs/_print/index.html index 21087bf97..a7aa57358 100644 --- a/cn/docs/_print/index.html +++ b/cn/docs/_print/index.html @@ -1,7 +1,7 @@ Documentation | HugeGraph

This is the multi-page printable view of this section. Click here to print.


Return to the regular view of this page.

Documentation

Welcome to the HugeGraph documentation

1 - Introduction with HugeGraph

Summary

HugeGraph is an easy-to-use, efficient, general-purpose open-source graph database system (Graph Database, GitHub project address). It implements the Apache TinkerPop3 framework and is fully compatible with the Gremlin query language, and it ships with a complete tool-chain ecosystem that helps users easily build applications and products on top of a graph database. HugeGraph supports fast import of more than ten billion vertices and edges, provides millisecond-level relationship queries (OLTP), and can be integrated with big-data platforms such as Hadoop and Spark for offline analysis (OLAP).

Typical application scenarios of HugeGraph include deep relationship exploration, correlation analysis, path search, feature extraction, data clustering, community detection, @@ -122,7 +122,7 @@

Request the RESTful API with curl

echo `curl -o /dev/null -s -w %{http_code} "http://localhost:8080/graphs/hugegraph/graph/vertices"`
 

A returned status code of 200 means the server started normally.

6.2 Request the Server

The RESTful API of HugeGraphServer includes multiple types of resources, typically including graph, schema, gremlin, traverser and task,

6.2.1 Get the vertices of hugegraph and their properties
curl http://localhost:8080/graphs/hugegraph/graph/vertices 
 

Notes

  1. Since a graph has many vertices and edges, for list-type requests such as getting all vertices or all edges, the Server compresses the data before returning it, so the raw curl output looks like garbled bytes. You can redirect the response to gunzip to decompress it. It is also recommended to use the Chrome browser with the Restlet plugin to send HTTP requests for testing.

    curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
     
  2. By default, HugeGraphServer can only be accessed from the local machine; you can modify the configuration so that it can be accessed from other machines.

    vim conf/rest-server.properties
     
     restserver.url=http://0.0.0.0:8080
    @@ -177,7 +177,7 @@
     }
     

For detailed APIs, please refer to the RESTful-API documentation.

7 Stop the Server

$cd hugegraph-${version}
 $bin/stop-hugegraph.sh

3.2 - HugeGraph-Loader Quick Start

1 Overview of HugeGraph-Loader

HugeGraph-Loader is the data import component of HugeGraph. It converts data from various sources into graph vertices and edges and imports them into the graph database in batches.

The currently supported data sources include:

Local disk files and HDFS files support resumable (breakpoint) loading.

These will be described in detail later.

Note: HugeGraph-Loader depends on the HugeGraph Server service. To download and start the Server, please refer to HugeGraph-Server Quick Start.

2 Get HugeGraph-Loader

There are two ways to get HugeGraph-Loader:

2.1 Download the compiled tarball

Download the latest release package of HugeGraph-Loader:

wget https://github.com/hugegraph/hugegraph-loader/releases/download/v${version}/hugegraph-loader-${version}.tar.gz
 tar zxvf hugegraph-loader-${version}.tar.gz
 

2.2 Clone the source code and build it

Clone the latest HugeGraph-Loader source code:

$ git clone https://github.com/hugegraph/hugegraph-loader.git
 

Due to the Oracle ojdbc license restriction, you need to manually install ojdbc into your local maven repository, for example as sketched below. @@ -493,7 +493,7 @@ ] }
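A minimal sketch of installing the driver into the local maven repository; the jar path and the groupId/artifactId/version coordinates below are placeholders and should be replaced with the values matching the ojdbc jar you downloaded:

    # install the manually downloaded ojdbc jar into the local maven repository (placeholder coordinates)
    mvn install:install-file -Dfile=./ojdbc8.jar -DgroupId=com.oracle -DartifactId=ojdbc8 -Dversion=12.2.0.1 -Dpackaging=jar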

The 1.0 version of the mapping file is centered on vertices and edges and sets the input source inside them, whereas the 2.0 version is centered on input sources and sets the vertex and edge mappings under each input. Some input sources (for example, a single file) can generate both vertices and edges; with the 1.0 format you would have to write an input block once in the vertex mapping block and once in the edge mapping block, and the two input blocks would be exactly the same, whereas the 2.0 format only requires the input to be written once. So compared with 1.0, the 2.0 format saves some repeated writing of inputs.

In the bin directory of hugegraph-loader-{version} there is a script tool mapping-convert.sh that can directly convert a 1.0 mapping file into a 2.0 one. It is used as follows:

bin/mapping-convert.sh struct.json

A struct-v2.json will be generated in the same directory as struct.json.

3.3.2 Input sources

Input sources are currently divided into three categories: FILE, HDFS and JDBC, distinguished by the type node. We call them the local file input source, the HDFS input source and the JDBC input source, which are introduced below.

3.3.2.1 Local file input source
3.3.2.2 HDFS input source

The nodes of the local file input source above and their meanings basically all apply here; only the nodes that differ from or are specific to the HDFS input source are listed below.

3.3.2.3 JDBC input source

As mentioned earlier, several relational databases are supported, but since their mapping structures are very similar they are collectively referred to as the JDBC input source, and the vendor node is used to distinguish the different databases.

MYSQL

node | fixed or common value
vendor | MYSQL
driver | com.mysql.cj.jdbc.Driver
url | jdbc:mysql://127.0.0.1:3306

schema: optional; if filled in, it must be the same as the value of database

POSTGRESQL

node | fixed or common value
vendor | POSTGRESQL
driver | org.postgresql.Driver
url | jdbc:postgresql://127.0.0.1:5432

schema: optional; the default value is "public"

ORACLE

node | fixed or common value
vendor | ORACLE
driver | oracle.jdbc.driver.OracleDriver
url | jdbc:oracle:thin:@127.0.0.1:1521

schema: optional; the default value is the same as the username

SQLSERVER

node | fixed or common value
vendor | SQLSERVER
driver | com.microsoft.sqlserver.jdbc.SQLServerDriver
url | jdbc:sqlserver://127.0.0.1:1433

schema: required
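For orientation, the sketch below shows how these nodes might be combined into a JDBC input block of a 2.0-format mapping file. The type, vendor, driver, url, database and schema nodes are the ones described in this section; the remaining keys (table, username, password) are illustrative assumptions and should be checked against the loader's mapping-file reference:

// illustrative JDBC input block of a 2.0-format mapping file (keys not described above are assumptions)
 {
   "input": {
     "type": "JDBC",
     "vendor": "MYSQL",
     "driver": "com.mysql.cj.jdbc.Driver",
     "url": "jdbc:mysql://127.0.0.1:3306",
     "database": "example_db",
     "table": "example_table",
     "username": "root",
     "password": "***"
   }
 }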

3.3.1 Vertex and edge mappings

The nodes (keys in the JSON file) of vertex mappings and edge mappings have many parts in common. The common parts are introduced first, then the nodes specific to vertex mappings and to edge mappings.

Nodes of the common part

Eight update strategies are supported (must be in ALL CAPS):

  1. Numeric accumulation: SUM
  2. Take the larger of two numbers/dates: BIGGER
  3. Take the smaller of two numbers/dates: SMALLER
  4. Union of Set properties: UNION
  5. Intersection of Set properties: INTERSECTION
  6. Append elements to a List property: APPEND
  7. Remove elements from a List/Set property: ELIMINATE
  8. Override the existing property: OVERRIDE

Note: if the newly imported property value is empty, the existing old value is used instead of the empty value; the effect can be seen in the following example.

// Specify the update strategy in the JSON file as follows
 {
   "vertices": [
     {
@@ -531,7 +531,7 @@
 to the failure files. After the user has corrected the data rows in the failure files, setting --reload-failure to true makes these "failure files" also be used as input sources for import (this does not affect the import of the normal files);
 of course, if a corrected data row still has problems, it will be recorded into the failure file again (there is no need to worry about duplicate rows).

Each vertex mapping or edge mapping produces its own failure files when some of its data fails to be inserted. Failure files are further divided into parse-failure files (suffix .parse-error) and insert-failure files (suffix .insert-error), and they are saved under the ${struct}/current directory. For example, if the mapping file contains a vertex mapping person and an edge mapping knows, each with some bad rows, then after the Loader exits you will see the following files under the ${struct}/current directory:

  • person-b4cd32ab.parse-error: data of vertex mapping person that failed to parse
  • person-b4cd32ab.insert-error: data of vertex mapping person that failed to insert
  • knows-eb6b2bac.parse-error: data of edge mapping knows that failed to parse
  • knows-eb6b2bac.insert-error: data of edge mapping knows that failed to insert

.parse-error and .insert-error files do not always exist together: a .parse-error file is only created when some rows failed to parse, and an .insert-error file is only created when some rows failed to insert.
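As a sketch of the failure-file workflow described above (the graph name, mapping file and schema file below are placeholders; the option spelling follows the text of this section and the command format shown later):

    # after fixing the bad rows under ${struct}/current, re-run the loader and
    # let it pick the failure files up as an additional input source
    bin/hugegraph-loader -g hugegraph -f ./struct.json -s ./schema.groovy --reload-failure true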

3.4.3 Files in the logs directory

During execution, the logs and error data are written to the hugegraph-loader.log file.

3.4.4 Run the command

Run bin/hugegraph-loader and pass in the arguments

bin/hugegraph-loader -g {GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h {HOST} -p {PORT}
 

4 Complete example

The following is the example in the example directory of the hugegraph-loader package.

4.1 Prepare the data

Vertex file: example/file/vertex_person.csv

marko,29,Beijing
 vadas,27,Hongkong
 josh,32,Beijing
@@ -647,7 +647,7 @@
 --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \
 --username admin --token admin --host xx.xx.xx.xx --port 8093 \
 --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g

3.3 - HugeGraph-Tools Quick Start

1 Overview of HugeGraph-Tools

HugeGraph-Tools is the automated deployment, management and backup/restore component of HugeGraph.

2 Get HugeGraph-Tools

There are two ways to get HugeGraph-Tools:

2.1 Download the binary tarball

Download the latest HugeGraph-Tools package:

wget https://github.com/hugegraph/hugegraph-tools/releases/download/v${version}/hugegraph-tools-${version}.tar.gz
 tar zxvf hugegraph-tools-${version}.tar.gz
 

2.2 Download the source code and build it

Download the latest HugeGraph-Tools source package:

$ git clone https://github.com/hugegraph/hugegraph-tools.git
 

Compile and generate the tar package:

cd hugegraph-tools
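 # assumed standard Maven build step; verify the exact command against the project README
 mvn package -DskipTests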
@@ -1287,7 +1287,7 @@
         hugeClient.close();
     }
 }

4.4 Run the Example

Before running the Example, the Server needs to be started; see HugeGraph-Server Quick Start for the startup process.

4.5 Description of the Example

For a description of the example, see the introduction to the basic APIs of HugeGraph-Client.

3.6 - HugeGraph-Spark Quick Start

Note: HugeGraph-Spark is no longer maintained and will not be updated; please switch to hugegraph-computer instead. Thanks for your understanding.

1 Overview of HugeGraph-Spark (Deprecated)

HugeGraph-Spark is a tool that connects HugeGraph and Spark GraphX. It can read data from HugeGraph, convert it into Spark GraphX RDDs, and then run the various graph algorithms in GraphX.

2 Environment dependencies

Before using HugeGraph-Spark, the HugeGraph Server service is required; to download and start the Server, refer to HugeGraph-Server Quick Start. In addition, since HugeGraph-Spark uses Spark GraphX, you also need to download spark; the example in this article uses apache-spark-2.1.1.

wget https://archive.apache.org/dist/spark/spark-2.1.1/spark-2.1.1-bin-hadoop2.7.tgz
 tar -zxvf spark-2.1.1-bin-hadoop2.7.tgz
 cd spark-2.1.1-bin-hadoop2.7
 

Then copy the hugegraph-spark jar package into spark's jars directory

cp {dir}/hugegraph-spark-0.9.0.jar jars
@@ -1505,7 +1505,7 @@
 

Stop the Server, run init-store.sh to initialize (create the database for the new graph), then restart the Server

$ bin/stop-hugegraph.sh
 $ bin/init-store.sh
 $ bin/start-hugegraph.sh

4.2 - HugeGraph Configuration Options

Gremlin Server configuration options

Corresponding configuration file: gremlin-server.yaml

config option | default value | description
host | 127.0.0.1 | The host or ip of Gremlin Server.
port | 8182 | The listening port of Gremlin Server.
graphs | hugegraph: conf/hugegraph.properties | The map of graphs with name and config file path.
scriptEvaluationTimeout | 30000 | The timeout for gremlin script execution(millisecond).
channelizer | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer | Indicates the protocol which the Gremlin Server provides service.
authentication | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism.
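For orientation, a minimal gremlin-server.yaml assembled from the default values in the table above might look like the sketch below (only these keys are shown; a real file contains more entries):

    host: 127.0.0.1
    port: 8182
    scriptEvaluationTimeout: 30000
    channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
    graphs: { hugegraph: conf/hugegraph.properties }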

Rest Server & API configuration options

Corresponding configuration file: rest-server.properties

config optiondefault valuedescription
graphs[hugegraph:conf/hugegraph.properties]The map of graphs’ name and config file.
server.idserver-1The id of rest server, used for license verification.
server.rolemasterThe role of nodes in the cluster, available types are [master, worker, computer]
restserver.urlhttp://127.0.0.1:8080The url for listening of rest server.
ssl.keystore_fileserver.keystoreThe path of server keystore file used when https protocol is enabled.
ssl.keystore_passwordThe password of the path of the server keystore file used when the https protocol is enabled.
restserver.max_worker_threads2 * CPUsThe maximum worker threads of rest server.
restserver.min_free_memory64The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
restserver.request_timeout30The time in seconds within which a request must complete, -1 means no timeout.
restserver.connection_idle_timeout30The time in seconds to keep an inactive connection alive, -1 means no timeout.
restserver.connection_max_requests256The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
gremlinserver.urlhttp://127.0.0.1:8182The url of gremlin server.
gremlinserver.max_route8The max route number for gremlin server.
gremlinserver.timeout30The timeout in seconds of waiting for gremlin server.
batch.max_edges_per_batch500The maximum number of edges submitted per batch.
batch.max_vertices_per_batch500The maximum number of vertices submitted per batch.
batch.max_write_ratio50The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
batch.max_write_threads0The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
auth.authenticatorThe class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.admin_token162f7848-0b6d-4faf-b557-3a0797869c55Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.graph_storehugegraphThe name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
auth.user_tokens[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
auth.audit_log_rate1000.0The max rate of audit log output per user, default value is 1000 records per second.
auth.cache_capacity10240The max cache capacity of each auth cache item.
auth.cache_expire600The expiration time in seconds of vertex cache.
auth.remote_urlIf the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ‘,’.
auth.token_expire86400The expiration time in seconds after token created
auth.token_secretFXQXbJtbCLxODc6tGci732pkH1cyf8QgSecret key of HS256 algorithm.
exception.allow_tracefalseWhether to allow exception trace stack.

Basic configuration options

The basic configuration options and the backend configuration options correspond to the configuration file {graph-name}.properties, e.g. hugegraph.properties

config optiondefault valuedescription
gremlin.graphcom.baidu.hugegraph.HugeFactoryGremlin entrance to create graph.
backendrocksdbThe data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
serializerbinaryThe serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
storehugegraphThe database name like Cassandra Keyspace.
store.connection_detect_interval600The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
store.graphgThe graph table name, which store vertex, edge and property.
store.schemamThe schema table name, which store meta data.
store.systemsThe system table name, which store system data.
schema.illegal_name_regex.\s+$|~.The regex specified the illegal format for schema name.
schema.cache_capacity10000The max cache size(items) of schema cache.
vertex.cache_typel2The type of vertex cache, allowed values are [l1, l2].
vertex.cache_capacity10000000The max cache size(items) of vertex cache.
vertex.cache_expire600The expire time in seconds of vertex cache.
vertex.check_customized_id_existfalseWhether to check the vertices exist for those using customized id strategy.
vertex.default_labelvertexThe default vertex label.
vertex.tx_capacity10000The max size(items) of vertices(uncommitted) in transaction.
vertex.check_adjacent_vertex_existfalseWhether to check the adjacent vertices of edges exist.
vertex.lazy_load_adjacent_vertextrueWhether to lazy load adjacent vertices of edges.
vertex.part_edge_commit_size5000Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
vertex.encode_primary_key_numbertrueWhether to encode number value of primary key in vertex id.
vertex.remove_left_index_at_overwritefalseWhether remove left index at overwrite.
edge.cache_typel2The type of edge cache, allowed values are [l1, l2].
edge.cache_capacity1000000The max cache size(items) of edge cache.
edge.cache_expire600The expiration time in seconds of edge cache.
edge.tx_capacity10000The max size(items) of edges(uncommitted) in transaction.
query.page_size500The size of each page when querying by paging.
query.batch_size1000The size of each batch when querying by batch.
query.ignore_invalid_datatrueWhether to ignore invalid data of vertex or edge.
query.index_intersect_threshold1000The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
query.ramtable_edges_capacity20000000The maximum number of edges in ramtable, include OUT and IN edges.
query.ramtable_enablefalseWhether to enable ramtable for query of adjacent edges.
query.ramtable_vertices_capacity10000000The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
query.optimize_aggregate_by_indexfalseWhether to optimize aggregate query(like count) by index.
oltp.concurrent_depth10The min depth to enable concurrent oltp algorithm.
oltp.concurrent_threads10Thread number to concurrently execute oltp algorithm.
oltp.collection_typeECThe implementation type of collections used in oltp algorithm.
rate_limit.read0The max rate(times/s) to execute query of vertices/edges.
rate_limit.write0The max rate(items/s) to add/update/delete vertices/edges.
task.wait_timeout10Timeout in seconds for waiting for the task to complete,such as when truncating or clearing the backend.
task.input_size_limit16777216The job input size limit in bytes.
task.result_size_limit16777216The job result size limit in bytes.
task.sync_deletionfalseWhether to delete schema or expired data synchronously.
task.ttl_delete_batch1The batch size used to delete expired data.
computer.config/conf/computer.yamlThe config file path of computer job.
search.text_analyzerikanalyzerChoose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
search.text_analyzer_modesmartSpecify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
snowflake.datecenter_id0The datacenter id of snowflake id generator.
snowflake.force_stringfalseWhether to force the snowflake long id to be a string.
snowflake.worker_id0The worker id of snowflake id generator.
raft.modefalseWhether the backend storage works in raft mode.
raft.safe_readfalseWhether to use linearly consistent read.
raft.use_snapshotfalseWhether to use snapshot.
raft.endpoint127.0.0.1:8281The peerid of current raft node.
raft.group_peers127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283The peers of current raft group.
raft.path./raft-logThe log path of current raft node.
raft.use_replicator_pipelinetrueWhether to use replicator line, when turned on it multiple logs can be sent in parallel, and the next log doesn’t have to wait for the ack message of the current log to be sent.
raft.election_timeout10000Timeout in milliseconds to launch a round of election.
raft.snapshot_interval3600The interval in seconds to trigger snapshot save.
raft.backend_threadscurrent CPU v-coresThe thread number used to apply task to backend.
raft.read_index_threads8The thread number used to execute reading index.
raft.apply_batch1The apply batch size to trigger disruptor event handler.
raft.queue_size16384The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
raft.queue_publish_timeout60The timeout in second when publish event into disruptor.
raft.rpc_threads80The rpc threads for jraft RPC layer.
raft.rpc_connect_timeout5000The rpc connect timeout for jraft rpc.
raft.rpc_timeout60000The rpc timeout for jraft rpc.
raft.rpc_buf_low_water_mark10485760The ChannelOutboundBuffer’s low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
raft.rpc_buf_high_water_mark20971520The ChannelOutboundBuffer’s high water mark of netty, only when buffer size exceed this size, the method ChannelOutboundBuffer.isWritable() will return false, it means that the downstream pressure is too great to process the request or network is very congestion, upstream needs to limit rate at this time.
raft.read_strategyReadOnlyLeaseBasedThe linearizability of read strategy.

RPC server configuration

config optiondefault valuedescription
rpc.client_connect_timeout20The timeout(in seconds) of rpc client connect to rpc server.
rpc.client_load_balancerconsistentHashThe rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is ‘consistentHash’, means forwarding by request parameters.
rpc.client_read_timeout40The timeout(in seconds) of rpc client read from rpc server.
rpc.client_reconnect_period10The period(in seconds) of rpc client reconnect to rpc server.
rpc.client_retries3Failed retry number of rpc client calls to rpc server.
rpc.config_order999Sofa rpc configuration file loading order, the larger the more later loading.
rpc.logger_implcom.alipay.sofa.rpc.log.SLF4JLoggerImplSofa rpc log implementation class.
rpc.protocolboltRpc communication protocol, client and server need to be specified the same value.
rpc.remote_urlThe remote urls of rpc peers, it can be set to multiple addresses, which are concat by ‘,’, empty value means not enabled.
rpc.server_adaptive_portfalseWhether the bound port is adaptive, if it’s enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
rpc.server_hostThe hosts/ips bound by rpc server to provide services, empty value means not enabled.
rpc.server_port8090The port bound by rpc server to provide services.
rpc.server_timeout30The timeout(in seconds) of rpc server execution.

Cassandra backend configuration options

config optiondefault valuedescription
backendMust be set to cassandra.
serializerMust be set to cassandra.
cassandra.hostlocalhostThe seeds hostname or ip address of cassandra cluster.
cassandra.port9042The seeds port address of cassandra cluster.
cassandra.connect_timeout5The cassandra driver connect server timeout(seconds).
cassandra.read_timeout20The cassandra driver read from server timeout(seconds).
cassandra.keyspace.strategySimpleStrategyThe replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
cassandra.keyspace.replication[3]The keyspace replication factor of SimpleStrategy, like ‘[3]’.Or replicas in each datacenter of NetworkTopologyStrategy, like ‘[dc1:2,dc2:1]’.
cassandra.usernameThe username to use to login to cassandra cluster.
cassandra.passwordThe password corresponding to cassandra.username.
cassandra.compression_typenoneThe compression algorithm of cassandra transport: none/snappy/lz4.
cassandra.jmx_port=71997199The port of JMX API service for cassandra.
cassandra.aggregation_timeout43200The timeout in seconds of waiting for aggregation.

ScyllaDB backend configuration options

config option | default value | description
backend |  | Must be set to scylladb.
serializer |  | Must be set to scylladb.

The other options are the same as for the Cassandra backend.

RocksDB backend configuration options

config optiondefault valuedescription
backendMust be set to rocksdb.
serializerMust be set to binary.
rocksdb.data_disks[]The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]
rocksdb.data_pathrocksdb-dataThe path for storing data of RocksDB.
rocksdb.wal_pathrocksdb-dataThe path for storing WAL of RocksDB.
rocksdb.allow_mmap_readsfalseAllow the OS to mmap file for reading sst tables.
rocksdb.allow_mmap_writesfalseAllow the OS to mmap file for writing.
rocksdb.block_cache_capacity8388608The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
rocksdb.bloom_filter_bits_per_key-1The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
rocksdb.bloom_filter_block_based_modefalseUse block based filter rather than full filter.
rocksdb.bloom_filter_whole_key_filteringtrueTrue if place whole keys in the bloom filter, else place the prefix of keys.
rocksdb.bottommost_compressionNO_COMPRESSIONThe compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.bulkload_modefalseSwitch to the mode to bulk load data into RocksDB.
rocksdb.cache_index_and_filter_blocksfalseIndicating if we’d put index/filter blocks to the block cache.
rocksdb.compaction_styleLEVELSet compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
rocksdb.compressionSNAPPY_COMPRESSIONThe compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.compression_per_level[NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION]The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.delayed_write_rate16777216The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
rocksdb.log_levelINFOThe info log level of RocksDB.
rocksdb.max_background_jobs8Maximum number of concurrent background jobs, including flushes and compactions.
rocksdb.level_compaction_dynamic_level_bytesfalseWhether to enable level_compaction_dynamic_level_bytes, if it’s enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it’s not recommended.
rocksdb.max_bytes_for_level_base536870912The upper-bound of the total size of level-1 files in bytes.
rocksdb.max_bytes_for_level_multiplier10.0The ratio between the total size of level (L+1) files and the total size of level L files for all L.
rocksdb.max_open_files-1The maximum number of open files that can be cached by RocksDB, -1 means no limit.
rocksdb.max_subcompactions4The value represents the maximum number of threads per compaction job.
rocksdb.max_write_buffer_number6The maximum number of write buffers that are built up in memory.
rocksdb.max_write_buffer_number_to_maintain0The total maximum number of write buffers to maintain in memory.
rocksdb.min_write_buffer_number_to_merge2The minimum number of write buffers that will be merged together.
rocksdb.num_levels7Set the number of levels for this database.
rocksdb.optimize_filters_for_hitsfalseThis flag allows us to not store filters for the last level.
rocksdb.optimize_modetrueOptimize for heavy workloads and big datasets.
rocksdb.pin_l0_filter_and_index_blocks_in_cachefalseIndicating if we’d put index/filter blocks to the block cache.
rocksdb.sst_pathThe path for ingesting SST file into RocksDB.
rocksdb.target_file_size_base67108864The target file size for compaction in bytes.
rocksdb.target_file_size_multiplier1The size ratio between a level L file and a level (L+1) file.
rocksdb.use_direct_io_for_flush_and_compactionfalseEnable the OS to use direct read/writes in flush and compaction.
rocksdb.use_direct_readsfalseEnable the OS to use direct I/O for reading sst tables.
rocksdb.write_buffer_size134217728Amount of data in bytes to build up in memory.
rocksdb.max_manifest_file_size104857600The max size of manifest file in bytes.
rocksdb.skip_stats_update_on_db_openfalseWhether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
rocksdb.max_file_opening_threads16The max number of threads used to open files.
rocksdb.max_total_wal_size0Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
rocksdb.db_write_buffer_size0Total size of write buffers in bytes across all column families, 0 means no limit.
rocksdb.delete_obsolete_files_period21600The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
rocksdb.hard_pending_compaction_bytes_limit274877906944The hard limit to impose on pending compaction in bytes.
rocksdb.level0_file_num_compaction_trigger2Number of files to trigger level-0 compaction.
rocksdb.level0_slowdown_writes_trigger20Soft limit on number of level-0 files for slowing down writes.
rocksdb.level0_stop_writes_trigger36Hard limit on number of level-0 files for stopping writes.
rocksdb.soft_pending_compaction_bytes_limit68719476736The soft limit to impose on pending compaction in bytes.

HBase backend configuration options

config optiondefault valuedescription
backendMust be set to hbase.
serializerMust be set to hbase.
hbase.hostslocalhostThe hostnames or ip addresses of HBase zookeeper, separated with commas.
hbase.port2181The port address of HBase zookeeper.
hbase.threads_max64The max threads num of hbase connections.
hbase.znode_parent/hbaseThe znode parent path of HBase zookeeper.
hbase.zk_retry3The recovery retry times of HBase zookeeper.
hbase.aggregation_timeout43200The timeout in seconds of waiting for aggregation.
hbase.kerberos_enablefalseIs Kerberos authentication enabled for HBase.
hbase.kerberos_keytabThe HBase’s key tab file for kerberos authentication.
hbase.kerberos_principalThe HBase’s principal for kerberos authentication.
hbase.krb5_confetc/krb5.confKerberos configuration file, including KDC IP, default realm, etc.
hbase.hbase_site/etc/hbase/conf/hbase-site.xmlThe HBase’s configuration file
hbase.enable_partitiontrueIs pre-split partitions enabled for HBase.
hbase.vertex_partitions10The number of partitions of the HBase vertex table.
hbase.edge_partitions30The number of partitions of the HBase edge table.

MySQL & PostgreSQL backend configuration options

config optiondefault valuedescription
backendMust be set to mysql.
serializerMust be set to mysql.
jdbc.drivercom.mysql.jdbc.DriverThe JDBC driver class to connect database.
jdbc.urljdbc:mysql://127.0.0.1:3306The url of database in JDBC format.
jdbc.usernamerootThe username to login database.
jdbc.password******The password corresponding to jdbc.username.
jdbc.ssl_modefalseThe SSL mode of connections with database.
jdbc.reconnect_interval3The interval(seconds) between reconnections when the database connection fails.
jdbc.reconnect_max_times3The reconnect times when the database connection fails.
jdbc.storage_engineInnoDBThe storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
jdbc.postgresql.connect_databasetemplate1The database used to connect when init store, drop store or check store exist.
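For orientation, a minimal MySQL backend section of hugegraph.properties assembled from the options and default values in the table above might look like this sketch (replace the credentials with your own):

    backend=mysql
    serializer=mysql
    jdbc.driver=com.mysql.jdbc.Driver
    jdbc.url=jdbc:mysql://127.0.0.1:3306
    jdbc.username=root
    # replace with the real password of jdbc.username
    jdbc.password=******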

PostgreSQL backend configuration options

config option | default value | description
backend |  | Must be set to postgresql.
serializer |  | Must be set to postgresql.

The other options are the same as for the MySQL backend.

The driver and url of the PostgreSQL backend should be set to:
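Based on the PostgreSQL driver and url values listed in the JDBC input source table earlier on this page, a sketch of these two options (append your own host, port and database name as needed):

    jdbc.driver=org.postgresql.Driver
    jdbc.url=jdbc:postgresql://127.0.0.1:5432/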

4.3 - Built-in user permissions and extended permission configuration and usage in HugeGraph

Overview

To make authentication convenient in different user scenarios, HugeGraph currently has two built-in permission modes:

  1. The simple ConfigAuthenticator mode, which stores usernames and passwords in a local configuration file (only a single GraphServer is supported)
  2. The complete StandardAuthenticator mode, which supports multi-user authentication and fine-grained access control, using a 4-layer design based on "user - user group - operation - resource" to flexibly control user roles and permissions (multiple GraphServers are supported)

Several core design points of the StandardAuthenticator mode:

For example:

// Scenario: a user only has read permission for data of the Beijing region
 user(name=xx) -belong-> group(name=xx) -access(read)-> target(graph=graph1, resource={label: person, city: Beijing})
 

Configure user authentication

HugeGraph does not enable user authentication by default; it is enabled by modifying the configuration files. Two modes are built in, StandardAuthenticator and ConfigAuthenticator: the StandardAuthenticator mode supports multi-user authentication and fine-grained permission control, while the ConfigAuthenticator mode supports simple user authentication. In addition, developers can implement the HugeAuthenticator interface themselves to integrate with their own permission system.
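As a minimal sketch of enabling authentication, assuming the StandardAuthenticator class path listed in the Rest Server & API configuration table above (gremlin-server.yaml needs a matching authenticator entry, as shown in the Gremlin Server table):

    # rest-server.properties
    auth.authenticator=com.baidu.hugegraph.auth.StandardAuthenticator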

Both authentication modes use HTTP Basic Authentication. Simply put, when sending an HTTP request, choose Basic in the Authentication settings and enter the corresponding username and password; the corresponding HTTP plaintext looks like this:

GET http://localhost:8080/graphs/hugegraph/schema/vertexlabels
 Authorization: Basic admin xxxx
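The equivalent request with curl using HTTP Basic Authentication (the admin username and the xxxx password placeholder are taken from the example above; substitute your own credentials):

    curl -u admin:xxxx "http://localhost:8080/graphs/hugegraph/schema/vertexlabels"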
@@ -5839,7 +5839,7 @@
 schema.getPropertyKey("name").dataType()
 schema.getPropertyKey("name").name()
 schema.getPropertyKey("name").userdata()

2.3 VertexLabel

2.3.1 Interface and parameters

VertexLabel is used to define a vertex type and describes the constraints of vertices:

The constraints that a VertexLabel allows to be defined include: name, idStrategy, properties, primaryKeys and nullableKeys, introduced one by one below.

interface | param | must set
vertexLabel(String name) | name | y

interface | idStrategy | description
useAutomaticId | AUTOMATIC | generate id automatically by Snowflake algorithm
useCustomizeStringId | CUSTOMIZE_STRING | passed id by user, must be string type
useCustomizeNumberId | CUSTOMIZE_NUMBER | passed id by user, must be number type
usePrimaryKeyId | PRIMARY_KEY | choose some important prop as primary key to splice id

interface | description
properties(String… properties) | allow to pass multi props

interface | description
primaryKeys(String… keys) | allow to choose multi prop as primaryKeys

Note that the choice of Id strategy and the setting of primaryKeys constrain each other and cannot be combined arbitrarily; the constraints are shown in the table below:

  | useAutomaticId | useCustomizeStringId | useCustomizeNumberId | usePrimaryKeyId
unset primaryKeys | AUTOMATIC | CUSTOMIZE_STRING | CUSTOMIZE_NUMBER | ERROR
set primaryKeys | ERROR | ERROR | ERROR | PRIMARY_KEY
interface | description
nullableKeys(String… properties) | allow to pass multi props

Note: primaryKeys and nullableKeys must not intersect, because a property cannot be both a primary property and nullable.

interface | description
enableLabelIndex(boolean enable) | Whether to create a label index

interface | description
userdata(String key, Object value) | The same key, the latter will cover the former

2.3.2 Create a VertexLabel
// Use the Automatic Id strategy
 schema.vertexLabel("person").properties("name", "age").ifNotExist().create();
 schema.getEdgeLabel("knows").properties()
 schema.getEdgeLabel("knows").nullableKeys()
 schema.getEdgeLabel("knows").userdata()

2.5 IndexLabel

2.5.1 Interfaces and Parameters

IndexLabel defines an index type and describes the constraints of the index, mainly to make queries convenient.

The constraints that can be defined on an IndexLabel include: name, baseType, baseValue, indexFields, and indexType, introduced one by one below.

| interface | param | must set |
|-----------|-------|----------|
| indexLabel(String name) | name | y |

| interface | param | description |
|-----------|-------|-------------|
| onV(String baseValue) | baseValue | build the index for the VertexLabel ‘baseValue’ |
| onE(String baseValue) | baseValue | build the index for the EdgeLabel ‘baseValue’ |

| interface | param | description |
|-----------|-------|-------------|
| by(String… fields) | fields | specify the fields to be indexed; a secondary index can be built over multiple fields |
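A minimal sketch of creating index labels with the hugegraph-client schema API (the label and field names follow the earlier illustrative examples; secondary() and range() are common index types, but check the API of your client version):

```java
// Secondary index on the "city" property of "person" vertices (exact-match queries)
schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();

// Range index on the "age" property of "person" vertices (range queries such as age > 20)
schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();
```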

Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues.

