You can use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to preload a built-in sample graph at startup. This can be verified through the RESTful API; for the detailed steps, see 5.1.1.
Also, if we want to manage other HugeGraph-related instances besides the server in one file, we can deploy with docker-compose using the command docker-compose up -d (configuring only the server is fine too). Here is an example docker-compose.yml:
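A minimal sketch of such a file, mirroring the server-only compose example shown later in this document (the image tag and ports are the defaults assumed here):

version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080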
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
# enter the tool's package
cd *hugegraph*/*tool*
$ ./bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]
| config option | default value | description |
|---|---|---|
| | | The class used to transfer algorithms' parameters before the algorithm is run. |
| algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of the vertex's value; the instance is used to store the computation result for the vertex. |
| allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator. |
| bsp.etcd_endpoints | http://localhost:2379 | The endpoints to access etcd. |
| bsp.log_interval | 30000 | The log interval (in ms) to print the log while waiting for the bsp event. |
| bsp.max_super_step | 10 | The max super step of the algorithm. |
| bsp.register_timeout | 300000 | The max timeout to wait for master and workers to register. |
| bsp.wait_master_timeout | 86400000 | The max timeout (in ms) to wait for the master bsp event. |
| bsp.wait_workers_timeout | 86400000 | The max timeout to wait for the workers bsp event. |
| hgkv.max_data_block_size | 65536 | The max byte size of an hgkv-file data block. |
| hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file. |
| hgkv.max_merge_files | 10 | The max number of files to merge at one time. |
| hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files; temporary files will be generated during the file merging process. |
| hugegraph.name | hugegraph | The graph name to load data from and write results back to. |
| hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data from and write results back to. |
| input.edge_direction | OUT | The direction in which edge data is loaded; when the value is BOTH, edges in both the OUT and IN directions will be loaded. |
| input.edge_freq | MULTIPLE | The frequency of edges that can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, identified by sourceId + targetId; SINGLE_PER_LABEL means that one edge per edge label can exist between a pair of vertices, identified by sourceId + edgelabel + targetId; MULTIPLE means that many edges can exist between a pair of vertices, identified by sourceId + edgelabel + sortValues + targetId. |
| | | The class to create the input-filter object; the input-filter is used to filter vertex edges according to user needs. |
| input.loader_schema_path | | The schema path of loader input, only takes effect when input.source_type=loader is enabled. |
| input.loader_struct_path | | The struct path of loader input, only takes effect when input.source_type=loader is enabled. |
| input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex; the adjacent edges will be stored and transferred together as a batch unit. |
| input.source_type | hugegraph-server | The source type to load input data from, allowed values: ['hugegraph-server', 'hugegraph-loader']; 'hugegraph-loader' means using hugegraph-loader to load data from HDFS or a file, in which case please configure 'input.loader_struct_path' and 'input.loader_schema_path'. |
| input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits. |
| input.split_max_splits | 10000000 | The maximum number of input splits. |
| input.split_page_size | 500 | The page size for streamed load of input split data. |
| input.split_size | 1048576 | The input split size in bytes. |
| job.id | local_0001 | The job id on the Yarn cluster or K8s cluster. |
| job.partitions_count | 1 | The partitions count for computing one graph algorithm job. |
| job.partitions_thread_nums | 4 | The number of threads for partition parallel compute. |
| job.workers_count | 1 | The workers count for computing one graph algorithm job. |
| | | The class to output the computation result of each vertex; called after the iteration computation. |
| output.result_name | value | The value is assigned dynamically by #name() of the instance created by WORKER_COMPUTATION_CLASS. |
| output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph, allowed values: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE]. |
| output.retry_interval | 10 | The retry interval when output fails. |
| output.retry_times | 3 | The retry times when output fails. |
| output.single_threads | 1 | The number of threads used for single output. |
| output.thread_pool_shutdown_timeout | 60 | The timeout in seconds for output thread pool shutdown. |
| output.with_adjacent_edges | false | Whether to output the adjacent edges of the vertex. |
| output.with_edge_properties | false | Whether to output the properties of the edge. |
| output.with_vertex_properties | false | Whether to output the properties of the vertex. |
| sort.thread_nums | 4 | The number of threads performing internal sorting. |
| transport.client_connect_timeout | 3000 | The timeout (in ms) for the client to connect to the server. |
| transport.client_threads | 4 | The number of transport threads for the client. |
| transport.close_timeout | 10000 | The timeout (in ms) to close the server or close the client. |
| transport.finish_session_timeout | 0 | The timeout (in ms) to finish a session; 0 means using (transport.sync_request_timeout * transport.max_pending_requests). |
| transport.heartbeat_interval | 20000 | The minimum interval (in ms) between heartbeats on the client side. |
| transport.io_mode | AUTO | The network IO mode, either 'NIO', 'EPOLL', or 'AUTO'; 'AUTO' means selecting the proper mode automatically. |
| transport.max_pending_requests | 8 | The max number of unacknowledged client requests; sending becomes unavailable if the number of unreceived acks >= max_pending_requests. |
| transport.max_syn_backlog | 511 | The capacity of the SYN queue on the server side; 0 means using the system default value. |
| transport.max_timeout_heartbeat_count | 120 | The maximum number of heartbeat timeouts on the client side; if the number of consecutive timeouts waiting for a heartbeat response exceeds this value, the channel will be closed from the client side. |
| transport.min_ack_interval | 200 | The minimum interval (in ms) of server reply ack. |
| transport.min_pending_requests | 6 | The minimum number of unacknowledged client requests; sending becomes available again if the number of unreceived acks < min_pending_requests. |
| transport.network_retries | 3 | The number of retry attempts for network communication if the network is unstable. |
| | | The transport provider; currently only Netty is supported. |
| transport.receive_buffer_size | 0 | The size of the socket receive-buffer in bytes; 0 means using the system default value. |
| transport.recv_file_mode | true | Whether to enable receive buffer-file mode; if enabled, received buffers will be written to file from the socket by zero-copy. |
| transport.send_buffer_size | 0 | The size of the socket send-buffer in bytes; 0 means using the system default value. |
| transport.server_host | 127.0.0.1 | The server hostname or ip to listen on to transfer data. |
| transport.server_idle_timeout | 360000 | The max timeout (in ms) of server idle. |
| transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it's set to 0. |
| transport.server_threads | 4 | The number of transport threads for the server. |
| transport.sync_request_timeout | 10000 | The timeout (in ms) to wait for a response after sending a sync-request. |
| transport.tcp_keep_alive | true | Whether to enable TCP keep-alive. |
| transport.transport_epoll_lt | false | Whether to enable EPOLL level-trigger. |
| transport.write_buffer_high_mark | 67108864 | The high water mark for the write buffer in bytes; sending becomes unavailable if the number of queued bytes > write_buffer_high_mark. |
| transport.write_buffer_low_mark | 33554432 | The low water mark for the write buffer in bytes; sending becomes available again if the number of queued bytes < write_buffer_low_mark. |
| transport.write_socket_timeout | 3000 | The timeout (in ms) to write data to the socket buffer. |
| valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of the value-file. |
| worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner can combine messages into one value for a vertex; for example, the page-rank algorithm can combine the messages of a vertex into a sum value. |
| worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create the worker-computation object; the worker-computation is used to compute each vertex in each superstep. |
| worker.data_dirs | [jobs] | The directories, separated by ',', that received vertices and messages can persist into. |
| | | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in. |
| worker.received_buffers_bytes_limit | 104857600 | The limit in bytes of buffers of received data; the total size of all buffers can't exceed this limit. If received buffers reach this limit, they will be merged into a file. |
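Taken together, a job typically sets only a handful of these options and leaves the rest at their defaults. A minimal sketch, assuming a Java-properties style file (one common way such options are supplied; the values are illustrative only):

# illustrative values; any option from the table above can be set the same way
job.id=local_0001
job.workers_count=1
job.partitions_count=4
hugegraph.url=http://127.0.0.1:8080
hugegraph.name=hugegraph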
$ ./bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]
Check the created graphs:
curl http://127.0.0.1:8080/graphs/
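If the server is running, the response should look roughly like the following (the graph names depend on your configuration):

{"graphs": ["hugegraph"]}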
HugeGraphServer integrates GremlinServer and RestServer internally; gremlin-server.yaml and rest-server.properties are used to configure these two servers.
GremlinServer: GremlinServer accepts the user's gremlin statements, parses them, and then calls the Core code. RestServer: provides the RESTful API and, according to the HTTP request, calls the corresponding Core API; if the request body is a gremlin statement, it is forwarded to GremlinServer to operate on the graph data. The three configuration files are introduced one by one below.
2 gremlin-server.yaml
The default content of the gremlin-server.yaml file is as follows:
# host and port of gremlin server, need to be consistent with host and port in rest-server.properties
#host: 127.0.0.1
#port: 8182
# timeout (in ms) of gremlin query
evaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WsAndHttpChannelizer
# don't set graphs here; this will be handled after dynamic graph adding is supported
graphs: { }
scriptEngines: {
  gremlin-groovy: {
    staticImports: [ org.
Typical application scenarios of HugeGraph include deep relationship exploration, association analysis, path search, feature extraction, data clustering, community detection, and knowledge graphs. It is applicable to business fields such as network security, telecom fraud, financial risk control, advertising recommendation, social networks, and intelligent robots.
Features
HugeGraph supports graph operations in online and offline environments, supports batch import of data, supports efficient complex relationship analysis, and can be seamlessly integrated with big data platforms.
HugeGraph supports multi-user parallel operations. Users can enter Gremlin query statements and get graph query results in time. They can also call the HugeGraph API in user programs for graph analysis or query.
This system has the following features:
Ease of use: HugeGraph supports Gremlin graph query language and RESTful API, provides common interfaces for graph retrieval, and has peripheral tools with complete functions to easily implement various graph-based query and analysis operations.
Efficiency: HugeGraph has been deeply optimized in graph storage and graph computing, and provides a variety of batch import tools, which can easily complete the rapid import of tens of billions of data, and achieve millisecond-level response for graph retrieval through optimized queries. Supports simultaneous online real-time operations of thousands of users.
Universal: HugeGraph supports the Apache Gremlin standard graph query language and the Property Graph standard graph modeling method, and supports graph-based OLTP and OLAP schemes. Integrate Apache Hadoop and Apache Spark big data platforms.
Scalable: supports distributed storage, multiple copies of data, and horizontal expansion, built-in multiple back-end storage engines, and can easily expand the back-end storage engine through plug-ins.
Open: HugeGraph code is open source (Apache 2 License), customers can modify and customize independently, and selectively give back to the open-source community.
The functions of this system include but are not limited to:
Supports batch import of data from multiple data sources (including local files, HDFS files, MySQL databases, and other data sources), and supports import of multiple file formats (including TXT, CSV, JSON, and other formats)
With a visual operation interface, it can be used for operation, analysis, and display diagrams, reducing the threshold for users to use
Optimized graph interfaces: shortest path (Shortest Path), K-step connected subgraph (K-neighbor), K-step reachable adjacent vertices (K-out), personalized recommendation algorithm PersonalRank, etc.
Implemented based on Apache TinkerPop3 framework, supports Gremlin graph query language
Support attribute graph, attributes can be added to vertices and edges, and support rich attribute types
Has independent schema metadata information, has powerful graph modeling capabilities, and facilitates third-party system integration
Supports multiple vertex ID strategies: primary key ID, automatic ID generation, user-defined string ID, and user-defined number ID
The attributes of edges and vertices can be indexed to support precise query, range query, and full-text search
The storage system adopts plug-in mode, supporting RocksDB, Cassandra, ScyllaDB, HBase, MySQL, PostgreSQL, Palo, and InMemory, etc.
Integrate with big data systems such as Hadoop and Spark GraphX, and support Bulk Load operations
Support high availability HA, multiple copies of data, backup recovery, monitoring, etc.
Modules
HugeGraph-Server: HugeGraph-Server is the core part of the HugeGraph project, including submodules such as Core, Backend, and API;
Core: Graph engine implementation, connecting the Backend module downward and supporting the API module upward;
Backend: Realize the storage of graph data to the backend. The supported backends include: Memory, Cassandra, ScyllaDB, RocksDB, HBase, MySQL, and PostgreSQL. Users can choose one according to the actual situation;
API: Built-in REST Server, provides RESTful API to users, and is fully compatible with Gremlin query.
HugeGraph-Client: HugeGraph-Client provides a RESTful API client for connecting to HugeGraph-Server. Currently, only Java version is implemented. Users of other languages can implement it by themselves;
HugeGraph-Loader: HugeGraph-Loader is a data import tool based on HugeGraph-Client, which converts ordinary text data into graph vertices and edges and inserts them into graph database;
HugeGraph-Computer: HugeGraph-Computer is a distributed graph processing system for HugeGraph (OLAP). It is an implementation of Pregel. It runs on the Kubernetes framework;
HugeGraph-Hubble: HugeGraph-Hubble is HugeGraph’s web visualization management platform, a one-stop visual analysis platform. The platform covers the whole process from data modeling, to rapid data import, to online and offline analysis of data, and unified management of graphs;
HugeGraph-Tools: HugeGraph-Tools is HugeGraph’s deployment and management tools, including functions such as managing graphs, backup/restore, Gremlin execution, etc.
Contact Us
GitHub Issues: Feedback on usage issues and functional requirements (quick response)
Note: The latest graph analysis and display platform is Hubble, which supports server v0.10 +.
3 - Quick Start
3.1 - HugeGraph-Server Quick Start
1 HugeGraph-Server Overview
HugeGraph-Server is the core part of the HugeGraph Project, contains submodules such as Core, Backend, API.
The Core Module is an implementation of the Tinkerpop interface; The Backend module is used to save the graph data to the data store, currently supported backends include: Memory, Cassandra, ScyllaDB, RocksDB; The API Module provides HTTP Server, which converts Client’s HTTP request into a call to Core Module.
There will be two spellings HugeGraph-Server and HugeGraphServer in the document, and other modules are similar. There is no big difference in the meaning of these two ways of writing, which can be distinguished as follows: HugeGraph-Server represents the code of server-related components, HugeGraphServer represents the service process.
2 Dependency for Building/Running
2.1 Install Java 11 (JDK 11)
Consider using Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now), and configure it yourself.
Be sure to execute the java -version command to check the JDK version before proceeding
3 Deploy
There are four ways to deploy HugeGraph-Server components:
We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start an inner HugeGraph server with RocksDB in background.
Optional:
use docker exec -it graph bash to enter the container to do some operations.
use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result; the detailed steps can be found in 5.1.1
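For example, after the container is up, a spot check along these lines should return some of the preloaded sample data (the endpoint shape follows the standard REST API and should be confirmed against your server version):

# list a few vertices of the preloaded example graph
curl "http://localhost:8080/apis/graphs/hugegraph/graph/vertices?limit=3"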
Also, if we want to manage other HugeGraph-related instances in one file, we can deploy with docker-compose using the command docker-compose up -d (configuring only the server is fine too). Here is an example docker-compose.yml:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    # environment:
    #   - PRELOAD=true
    # PRELOAD is an option to preload a built-in sample graph when initializing.
    ports:
      - 8080:8080
3.2 Download the binary tarball
You could download the binary tarball from the download page of ASF site like this:
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
cd *hugegraph*/*tool*
Note: ${version} is the version number; for the latest version, refer to the Download Page, or click the link there to download directly.
The general entry script for HugeGraph-Tools is bin/hugegraph. Users can use the help command to view its usage; here only the commands for one-click deployment are introduced.
{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed; users can view the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download URL is used if it is not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.
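Putting the pieces above together, the one-click deployment command has the following shape (the placeholders are the ones described in the previous paragraph):

# general form
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-d {download-path-prefix}]
# e.g. deploy version 0.6 into the services directory
bin/hugegraph deploy -v 0.6 -p services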
4 Config
If you need to quickly start HugeGraph just for testing, then you only need to modify a few configuration items (see next section).
For a detailed configuration introduction, please refer to the configuration document and the introduction to configuration items.
5 Startup
5.1 Use Docker to startup
In 3.1 Use Docker container, we have introduced how to use docker to deploy hugegraph-server. The server can also preload an example graph by setting a parameter.
5.1.1 Create example graph when starting server
Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.
Use docker run
Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
Use docker-compose
Create docker-compose.yml as follows. We should set the environment variable PRELOAD=true. example.groovy is a predefined script to preload the sample data. If needed, we can mount a new example.groovy to change the preloaded data.
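A minimal sketch of such a docker-compose.yml; the host-side path of example.groovy is a placeholder to adapt, and the container-side path is an assumption about the image layout:

version: '3'
services:
  graph:
    image: hugegraph/hugegraph:latest
    container_name: graph
    environment:
      - PRELOAD=true
    volumes:
      # mount a custom example.groovy to change the preloaded data
      - /path/to/example.groovy:/hugegraph/scripts/example.groovy
    ports:
      - 8080:8080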
Since the scylladb database itself is an "optimized version" based on cassandra, users who do not have scylladb installed can also use cassandra as the backend storage directly: they only need to change the backend and serializer to scylladb, and point the host and port to the seeds and port of the cassandra cluster. This works, but it is not recommended, as it will not take advantage of scylladb itself.
Initialize the database (required on first startup, or when a new configuration has been manually added under conf/graphs/)
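In command form, assuming the standard script names shipped in the package:

cd *hugegraph-${version}
# initialize the backend store (first startup, or after adding a new graph config)
bin/init-store.sh
# then start the server
bin/start-hugegraph.sh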
HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
Currently supported data sources include:
Local disk file or directory, supports TEXT, CSV and JSON format files, supports compressed files
HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads.
It will be explained in detail below.
Note: HugeGraph-Loader requires the HugeGraph Server service; please refer to HugeGraph-Server Quick Start to download and start the Server
2 Get HugeGraph-Loader
There are two ways to get HugeGraph-Loader:
Download the compiled tarball
Clone source code then compile and install
2.1 Download the compiled archive
Download the latest version of the HugeGraph-Toolchain release package:
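For instance, following the same download pattern used for the server tarball earlier in this document (the exact artifact name is an assumption; check the release page):

# replace {version} with the release you want
wget https://downloads.apache.org/incubator/hugegraph/{version}/apache-hugegraph-toolchain-incubating-{version}.tar.gz
tar zxf apache-hugegraph-toolchain-incubating-{version}.tar.gz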
3.3 - HugeGraph-Hubble Quick Start
1 HugeGraph-Hubble Overview
HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecosystem covering export, backup, and recovery. It effectively solves the storage, query, and correlation-analysis needs of massive graph data. HugeGraph is widely used in fields such as risk control, insurance claims, recommendation search, crime crackdowns by public security, knowledge graphs, network security, and IT operation and maintenance at banks and securities companies, and is committed to letting more industries, organizations, and users enjoy the comprehensive value of data more broadly.
HugeGraph-Hubble is HugeGraph's one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a whole-process wizard for graph applications. It is designed to improve the fluency of use, lower the barrier to entry, and provide a more efficient and easy-to-use user experience.
The platform mainly includes the following modules:
Graph Management
The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.
Metadata Modeling
The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.
Data Import
Data import is to convert the user’s business data into the vertices and edges of the graph and insert it into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel operation of multiple import tasks are realized. Improve import performance. After entering the import task, you only need to follow the platform step prompts, upload files as needed, and fill in the content to easily implement the import process of graph data. At the same time, it supports breakpoint resuming, error retry mechanism, etc., which reduces import costs and improves efficiency.
Graph Analysis
By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, and functions such as customized multidimensional path query of vertices can be provided, and three kinds of graph result display methods are provided, including: graph form, table form, Json form, and multidimensional display. The data form meets the needs of various scenarios used by users. It provides functions such as running records and collection of common statements, realizing the traceability of graph operations, and the reuse and sharing of query input, which is fast and efficient. It supports the export of graph data, and the export format is Json format.
Task Management
For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.
2 Deploy
There are three ways to deploy hugegraph-hubble:
Use Docker (recommended)
Download the Toolchain binary package
Source code compilation
2.1 Use docker (recommended)
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it directly to localhost/127.0.0.1. That would refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server.
If hubble and the server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port configured by the host for the server.
We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.
Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.
Use docker-compose up -d; the docker-compose.yml is as follows:
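A minimal sketch pairing hubble with a server in one compose file (service and container names are illustrative; with this layout, the hostname to fill in on the Hubble page would be graph and the port 8080, as noted above):

version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080
  hubble:
    image: hugegraph/hubble
    container_name: hubble
    ports:
      - 8088:8088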
The module usage process of the platform is as follows:
4 Platform Instructions
4.1 Graph Management
4.1.1 Graph creation
Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.
Create graph by filling in the content as follows:
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it directly to localhost/127.0.0.1. If hubble and the server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port configured by the host for the server.
4.1.2 Graph Access
Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
4.1.3 Graph management
Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
Search range: You can search for the graph name and ID.
4.2 Metadata Modeling (list + graph mode)
4.2.1 Module entry
Left navigation:
4.2.2 Property type
4.2.2.1 Create type
Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
Created attributes can be used as attributes of vertex type and edge type.
List mode:
Graph mode:
4.2.2.2 Reuse
The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.
Select reuse items:
Check reuse items:
4.2.2.3 Management
You can delete a single item or delete it in batches in the attribute list.
4.2.3 Vertex type
4.2.3.1 Create type
Fill in or select the vertex type name, ID strategy, association attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and index information (including whether to create a type index and the specific content of the attribute index) to complete the vertex type creation.
List mode:
Graph mode:
4.2.3.2 Reuse
The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together.
The reuse method is similar to the property reuse, see 4.2.2.2.
4.2.3.3 Administration
Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.
You can delete a single item or delete it in batches.
4.2.4 Edge Types
4.2.4.1 Create
Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and index information (including whether to create a type index and the specific content of the attribute index) to complete the creation of the edge type.
List mode:
Graph mode:
4.2.4.2 Reuse
The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
The reuse method is similar to the property reuse, see 4.2.2.2.
4.2.4.3 Administration
Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
You can delete a single item or delete it in batches.
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
Note: currently, we recommend using hugegraph-loader for formal data import; the built-in import of hubble is suited for testing and getting started.
The usage process of data import is as follows:
4.3.1 Module entrance
Left navigation:
4.3.2 Create task
Fill in the task name and remarks (optional) to create an import task.
Multiple import tasks can be created and imported in parallel.
4.3.3 Uploading files
Upload the files from which the graph will be built. The currently supported format is CSV; more formats will be supported over time.
Multiple files can be uploaded at the same time.
4.3.4 Setting up data mapping
Set up data mapping for the uploaded files, including file settings and type settings
File settings: check or fill in the file's own settings, such as whether it includes a header, the separator, and the encoding format; all of them have default values and do not need to be filled in manually
Type setting:
Vertex map and edge map:
【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;
【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;
Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.
After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.
Fill in the settings map:
Mapping list:
4.3.5 Import data
Before importing, you need to fill in the import setting parameters. After filling them in, you can start importing data into the graph database.
Import settings
The import setting parameters are as shown in the figure below; all have default values and do not need to be filled in manually
Import details
Click Start Import to start the file import task
The import details provide the mapping type, import speed, import progress, time consumed, and the specific status of the current task for each uploaded file, and each task can be paused, resumed, stopped, and so on
If the import fails, you can view the specific reason
4.4 Data Analysis
4.4.1 Module entry
Left navigation:
4.4.2 Multi-image switching
By switching the entrance on the left, flexibly switch the operation space of multiple graphs
4.4.3 Graph Analysis and Processing
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
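For instance, statements of roughly this shape are what one would enter here (a generic Gremlin sketch; the 'person' label is hypothetical and depends on your schema):

// fetch up to 10 vertices from the graph
g.V().limit(10)
// filter vertices by a (hypothetical) label and inspect their properties
g.V().hasLabel('person').valueMap()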
After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
Support zoom, center, full screen, export and other operations.
【Graph mode】
【Table mode】
【Json mode】
4.4.4 Data Details
Click the vertex/edge entity to view the data details of the vertex/edge, including: vertex/edge type, vertex ID, attribute and corresponding value, expand the information display dimension of the graph, and improve the usability.
4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc.
Expand: Click to display the vertices associated with the selected point.
Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
Hide: When clicked, hides the selected point and its associated edges.
Double-clicking a vertex also displays the vertex associated with the selected point.
4.4.6 Add vertex/edge
4.4.6.1 Added vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
Click on the graph area panel, the Add Vertex entry appears
Click the first icon in the action bar in the upper right corner
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
4.4.7 Execute the query of records and favorites
Each query is recorded at the bottom of the graph area, including: query time, execution type, content, status, and time consumed, as well as [favorite] and [load] operations, providing a complete, traceable record of graph execution and allowing execution content to be quickly loaded and reused
A statement-favorites function is provided to save frequently used statements for convenient, fast access.
4.5 Task Management
4.5.1 Module entry
Left navigation:
4.5.2 Task Management
Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
gremlin: Gremlin tasks
algorithm: OLAP algorithm task
remove_schema: remove metadata
rebuild_index: rebuild the index
The list displays the asynchronous task information of the current graph, including: task ID, task name, task type, creation time, time-consuming, status, operation, and realizes the management of asynchronous tasks.
Support filtering by task type and status
Support searching for task ID and task name
Asynchronous tasks can be deleted or deleted in batches
4.5.3 Gremlin asynchronous tasks
Create a task
The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to the Gremlin task, after clicking execute, an asynchronous task will be created in the asynchronous task center;
Task submission
After the task is submitted successfully, the graph area returns the submission result and task ID
Task details
A [View] entry is provided; you can jump to the task details to view the specific execution of the current task. After jumping to the task center, the currently executing task row is displayed directly
Click to view the entry to jump to the task management list, as follows:
View the results
The results are displayed in JSON form
4.5.4 OLAP algorithm tasks
There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
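For example, a task submitted this way could be checked with a call of the following shape (the tasks endpoint path and the task ID are assumptions to verify against your deployment):

# query the status/progress of task 2 on graph 'hugegraph'
curl "http://localhost:8080/apis/graphs/hugegraph/tasks/2"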
4.5.5 Delete metadata, rebuild index
Create a task
In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created
When editing an existing vertex/edge type operation, when adding an index, an asynchronous task of creating an index can be created
Task details
After confirming/saving, you can jump to the task center to view the details of the current task
3.4 - HugeGraph-Client Quick Start
1 Overview Of Hugegraph
HugeGraph-Client sends HTTP request to HugeGraph-Server to obtain and parse the execution result of Server. Currently only the HugeGraph-Client for Java is provided. You can use HugeGraph-Client to write Java code to operate HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing gremlin statements.
2 What You Need
Java 11 (Java 8 is also supported)
Maven 3.5+
3 How To Use
The basic steps to use HugeGraph-Client are as follows:
Build a new Maven project by IDEA or Eclipse
Add HugeGraph-Client dependency in pom file;
Create an object to invoke the interface of HugeGraph-Client
See the complete example in the following section for details.
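For step 2, the dependency typically looks like the following (the group id matches the org.apache.hugegraph packages used elsewhere in this document; treat the artifact id and version as assumptions to confirm against the release you use):

<dependency>
    <groupId>org.apache.hugegraph</groupId>
    <artifactId>hugegraph-client</artifactId>
    <!-- placeholder: use the client version matching your server -->
    <version>${hugegraph-client.version}</version>
</dependency>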
The module usage process of the platform is as follows:
4 Platform Instructions
4.1 Graph Management
4.1.1 Graph creation
Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.
Create graph by filling in the content as follows:
Special Note: If you are starting hubble with Docker, and hubble and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to localhost/127.0.0.1. If hubble and server is in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, and the port is configured by the host for the server.
4.1.2 Graph Access
Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
4.1.3 Graph management
Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
Search range: You can search for the graph name and ID.
4.2 Metadata Modeling (list + graph mode)
4.2.1 Module entry
Left navigation:
4.2.2 Property type
4.2.2.1 Create type
Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
Created attributes can be used as attributes of vertex type and edge type.
List mode:
Graph mode:
4.2.2.2 Reuse
The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.
Select reuse items:
Check reuse items:
4.2.2.3 Management
You can delete a single item or delete it in batches in the attribute list.
4.2.3 Vertex type
4.2.3.1 Create type
Fill in or select the vertex type name, ID strategy, association attribute, primary key attribute, vertex style, content displayed below the vertex in the query result, and index information: including whether to create a type index, and the specific content of the attribute index, complete the vertex Type creation.
List mode:
Graph mode:
4.2.3.2 Reuse
The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together.
The reuse method is similar to the property reuse, see 3.2.2.2.
4.2.3.3 Administration
Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.
You can delete a single item or delete it in batches.
4.2.4 Edge Types
4.2.4.1 Create
Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, content displayed below the edge in the query result, and index information: including whether to create a type index, and attribute index The specific content, complete the creation of the edge type.
List mode:
Graph mode:
4.2.4.2 Reuse
The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
The reuse method is similar to the property reuse, see 3.2.2.2.
4.2.4.3 Administration
Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
You can delete a single item or delete it in batches.
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
Note:currently, we recommend to use hugegraph-loader to import data formally. The built-in import of hubble is used for testing and getting started.
The usage process of data import is as follows:
4.3.1 Module entrance
Left navigation:
4.3.2 Create task
Fill in the task name and remarks (optional) to create an import task.
Multiple import tasks can be created and imported in parallel.
4.3.3 Uploading files
Upload the file that needs to be composed. The currently supported format is CSV, which will be updated continuously in the future.
Multiple files can be uploaded at the same time.
4.3.4 Setting up data mapping
Set up data mapping for uploaded files, including file settings and type settings
File settings: Check or fill in whether to include the header, separator, encoding format and other settings of the file itself, all set the default values, no need to fill in manually
Type setting:
Vertex map and edge map:
【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;
【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;
Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.
After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.
Fill in the settings map:
Mapping list:
4.3.5 Import data
Before importing, you need to fill in the import setting parameters. After filling in, you can start importing data into the gallery.
Import settings
The import setting parameter items are as shown in the figure below, all set the default value, no need to fill in manually
Import details
Click Start Import to start the file import task.
The import details show, for each uploaded file, the configured mapping type, import speed, import progress, elapsed time, and the specific status of the current task; each task can be paused, resumed, stopped, and so on.
If the import fails, you can view the specific reason
4.4 Data Analysis
4.4.1 Module entry
Left navigation:
4.4.2 Multi-image switching
By switching the entry on the left, you can flexibly switch among the workspaces of multiple graphs.
4.4.3 Graph Analysis and Processing
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general-purpose graph database query language. By entering Gremlin statements and clicking Execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
After a Gremlin query, the area below displays the graph results and provides 3 display modes: [Graph Mode], [Table Mode], [Json Mode].
Zoom, center, full screen, export, and other operations are supported.
【Graph Mode】
【Table Mode】
【Json Mode】
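The modes above show statements entered directly in Hubble. For reference, the same queries can also be issued programmatically; here is a minimal sketch using the Gremlin interface of hugegraph-client, assuming a hugegraph-client 1.x dependency with the org.apache.hugegraph packages (older releases use com.baidu.hugegraph) and a server at localhost:8080:

import org.apache.hugegraph.driver.GremlinManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.structure.gremlin.ResultSet;

public class GremlinQueryExample {
    public static void main(String[] args) {
        // Connect to the server and the graph to analyze
        HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph").build();
        GremlinManager gremlin = client.gremlin();

        // The same statement you would type into the Hubble input box
        ResultSet resultSet = gremlin.gremlin("g.V().limit(10)").execute();
        resultSet.iterator().forEachRemaining(result -> System.out.println(result.getObject()));

        client.close();
    }
}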
4.4.4 Data Details
Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the dimensions of information displayed for the graph and improving usability.
4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in the query result to realize customized mining of graph results.
Right-click a vertex to open its menu, from which the vertex can be expanded, queried, hidden, and so on.
Expand: Click to display the vertices associated with the selected point.
Query: select the edge type and direction associated with the selected vertex, then choose its attributes and the corresponding filtering rules under that condition to realize a customized path display.
Hide: When clicked, hides the selected point and its associated edges.
Double-clicking a vertex also displays the vertex associated with the selected point.
4.4.6 Add vertex/edge
4.4.6.1 Add vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
Click the graph area panel, and the Add Vertex entry appears
Click the first icon in the action bar in the upper right corner
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
4.4.7 Execution records and favorites
Each query is recorded at the bottom of the graph area, including the query time, execution type, content, status, and elapsed time, as well as [favorite] and [load] operations, providing a complete, traceable record of graph executions and allowing execution content to be quickly loaded and reused.
A favorites function is provided for collecting frequently used statements, making it convenient to call high-frequency statements quickly.
4.5 Task Management
4.5.1 Module entry
Left navigation:
4.5.2 Task Management
Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
gremlin: Gremlin tasks
algorithm: OLAP algorithm task
remove_schema: remove metadata
rebuild_index: rebuild the index
The list displays the asynchronous task information of the current graph, including the task ID, task name, task type, creation time, elapsed time, status, and operations, realizing the management of asynchronous tasks.
Supports filtering by task type and status
Supports searching by task ID and task name
Asynchronous tasks can be deleted individually or in batches
4.5.3 Gremlin asynchronous tasks
Create a task
The data analysis module currently supports two Gremlin operations: Gremlin query and Gremlin task. If the user switches to Gremlin task, clicking Execute creates an asynchronous task in the asynchronous task center.
Task submission
After the task is submitted successfully, the graph area returns the submission result and the task ID.
Task details
A [View] entry is provided for jumping to the task details to check the specific execution of the current task. After jumping to the task center, the row of the currently executing task is displayed directly.
Click the View entry to jump to the task management list, as follows:
View the results
The results are displayed in JSON form.
4.5.4 OLAP algorithm tasks
There is no visual OLAP algorithm execution in Hubble. You can call the RESTful API to run OLAP algorithm tasks, find the corresponding task by ID in task management, and view its progress and results.
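For reference, a minimal sketch of checking such a task from Java over the Task RESTful API, assuming a local server, the default graph hugegraph, and a known task ID (the ID 2 below is illustrative only):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaskStatusExample {
    public static void main(String[] args) throws Exception {
        long taskId = 2; // hypothetical ID returned when the OLAP task was submitted
        // GET /graphs/{graph}/tasks/{id} returns the task's type, status and progress
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/tasks/" + taskId))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON including task_status and task_progress
    }
}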
4.5.5 Delete metadata, rebuild index
Create a task
In the metadata modeling module, an asynchronous task for deleting metadata can be created when metadata is deleted
When editing an existing vertex/edge type and adding an index, an asynchronous task for creating the index can be created
Task details
After confirming/saving, you can jump to the task center to view the details of the current task
3.4 - HugeGraph-Client Quick Start
1 Overview of HugeGraph-Client
HugeGraph-Client sends HTTP requests to HugeGraph-Server to obtain and parse the Server’s execution results. Currently, only a Java version of HugeGraph-Client is provided. You can use HugeGraph-Client to write Java code to operate HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing Gremlin statements.
2 What You Need
Java 11 (Java 8 is also supported)
Maven 3.5+
3 How To Use
The basic steps to use HugeGraph-Client are as follows:
Build a new Maven project with IDEA or Eclipse;
Add the HugeGraph-Client dependency in the pom.xml file;
Create an object and invoke the interfaces of HugeGraph-Client.
A minimal sketch is given below; see the complete example in the following section for details.
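The following sketch walks through the steps above. It assumes a hugegraph-client 1.x dependency with the org.apache.hugegraph packages (older releases use com.baidu.hugegraph) and a server running at localhost:8080; names such as person and marko are illustrative only:

import org.apache.hugegraph.driver.GraphManager;
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;
import org.apache.hugegraph.structure.graph.Vertex;
import org.apache.tinkerpop.gremlin.structure.T;

public class ClientQuickStart {
    public static void main(String[] args) {
        // Connect to the server and choose the graph to operate on
        HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph").build();

        // Schema operations: define a property key and a vertex label
        SchemaManager schema = client.schema();
        schema.propertyKey("name").asText().ifNotExist().create();
        schema.vertexLabel("person").properties("name").primaryKeys("name").ifNotExist().create();

        // Graph-data operations: insert a vertex and print its generated id
        GraphManager graph = client.graph();
        Vertex marko = graph.addVertex(T.label, "person", "name", "marko");
        System.out.println(marko.id());

        client.close();
    }
}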
$ ./bin/start-hugegraph.sh
Starting HugeGraphServer...
Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]
Check out created graphs:
curl http://127.0.0.1:8080/graphs/
4.5 - HugeGraph-Computer Config
Computer Config Options
config option
default value
description
algorithm.message_class
org.apache.hugegraph.computer.core.config.Null
The class of message passed when compute vertex.
algorithm.params_class
org.apache.hugegraph.computer.core.config.Null
The class used to transfer algorithms’ parameters before the algorithm is run.
algorithm.result_class
org.apache.hugegraph.computer.core.config.Null
The class of vertex’s value, the instance is used to store computation result for the vertex.
allocator.max_vertices_per_thread
10000
Maximum number of vertices per thread processed in each memory allocator
bsp.etcd_endpoints
http://localhost:2379
The end points to access etcd.
bsp.log_interval
30000
The log interval(in ms) to print the log while waiting bsp event.
bsp.max_super_step
10
The max super step of the algorithm.
bsp.register_timeout
300000
The max timeout to wait for master and workers to register.
bsp.wait_master_timeout
86400000
The max timeout(in ms) to wait for master bsp event.
bsp.wait_workers_timeout
86400000
The max timeout to wait for workers bsp event.
hgkv.max_data_block_size
65536
The max byte size of hgkv-file data block.
hgkv.max_file_size
2147483648
The max number of bytes in each hgkv-file.
hgkv.max_merge_files
10
The max number of files to merge at one time.
hgkv.temp_file_dir
/tmp/hgkv
This folder is used to store temporary files, temporary files will be generated during the file merging process.
hugegraph.name
hugegraph
The graph name to load data and write results back.
hugegraph.url
http://127.0.0.1:8080
The hugegraph url to load data and write results back.
input.edge_direction
OUT
The data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
input.edge_freq
MULTIPLE
The frequency of edges that can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, identified by sourceId + targetId; SINGLE_PER_LABEL means that one edge per edge label can exist between a pair of vertices, identified by sourceId + edgelabel + targetId; MULTIPLE means that multiple edges can exist between a pair of vertices, identified by sourceId + edgelabel + sortValues + targetId.
The class to create the input-filter object; the input-filter is used to filter vertex edges according to user needs.
input.loader_schema_path
The schema path of loader input, only takes effect when the input.source_type=loader is enabled
input.loader_struct_path
The struct path of loader input, only takes effect when the input.source_type=loader is enabled
input.max_edges_in_one_vertex
200
The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
input.source_type
hugegraph-server
The source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’]. ‘hugegraph-loader’ means using hugegraph-loader to load data from HDFS or files; if ‘hugegraph-loader’ is used, please also configure ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
input.split_fetch_timeout
300
The timeout in seconds to fetch input splits
input.split_max_splits
10000000
The maximum number of input splits
input.split_page_size
500
The page size for streamed load input split data
input.split_size
1048576
The input split size in bytes
job.id
local_0001
The job id on Yarn cluster or K8s cluster.
job.partitions_count
1
The partitions count for computing one graph algorithm job.
job.partitions_thread_nums
4
The number of threads for partition parallel compute.
job.workers_count
1
The workers count for computing one graph algorithm job.
The class to output the computation result of each vertex; it is called after the iteration computation.
output.result_name
value
The value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
output.result_write_type
OLAP_COMMON
The result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
output.retry_interval
10
The retry interval when output failed
output.retry_times
3
The retry times when output failed
output.single_threads
1
The number of threads used for single output
output.thread_pool_shutdown_timeout
60
The timeout in seconds of the output thread pool shutdown
output.with_adjacent_edges
false
Output the adjacent edges of the vertex or not
output.with_edge_properties
false
Output the properties of the edge or not
output.with_vertex_properties
false
Output the properties of the vertex or not
sort.thread_nums
4
The number of threads performing internal sorting.
transport.client_connect_timeout
3000
The timeout(in ms) of client connect to server.
transport.client_threads
4
The number of transport threads for client.
transport.close_timeout
10000
The timeout(in ms) of close server or close client.
transport.finish_session_timeout
0
The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
transport.heartbeat_interval
20000
The minimum interval(in ms) between heartbeats on client side.
transport.io_mode
AUTO
The network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’; ‘AUTO’ means selecting the proper mode automatically.
transport.max_pending_requests
8
The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
transport.max_syn_backlog
511
The capacity of SYN queue on server side, 0 means using system default value.
transport.max_timeout_heartbeat_count
120
The maximum times of timeout heartbeat on the client side; if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts, the channel will be closed from the client side.
transport.min_ack_interval
200
The minimum interval(in ms) of server reply ack.
transport.min_pending_requests
6
The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
transport.network_retries
3
The number of retry attempts for network communication if the network is unstable.
The transport provider, currently only supports Netty.
transport.receive_buffer_size
0
The size of socket receive-buffer in bytes, 0 means using system default value.
transport.recv_file_mode
true
Whether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
transport.send_buffer_size
0
The size of socket send-buffer in bytes, 0 means using system default value.
transport.server_host
127.0.0.1
The server hostname or ip to listen on to transfer data.
transport.server_idle_timeout
360000
The max timeout(in ms) of server idle.
transport.server_port
0
The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
transport.server_threads
4
The number of transport threads for server.
transport.sync_request_timeout
10000
The timeout(in ms) to wait response after sending sync-request.
transport.tcp_keep_alive
true
Whether enable TCP keep-alive.
transport.transport_epoll_lt
false
Whether enable EPOLL level-trigger.
transport.write_buffer_high_mark
67108864
The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
transport.write_buffer_low_mark
33554432
The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.
transport.write_socket_timeout
3000
The timeout(in ms) to write data to socket buffer.
valuefile.max_segment_size
1073741824
The max number of bytes in each segment of value-file.
worker.combiner_class
org.apache.hugegraph.computer.core.config.Null
Combiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
worker.computation_class
org.apache.hugegraph.computer.core.config.Null
The class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
worker.data_dirs
[jobs]
The directories separated by ‘,’ that received vertices and messages can persist into.
The partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
worker.received_buffers_bytes_limit
104857600
The limit in bytes of buffers of received data, the total size of all buffers can’t exceed this limit. If received buffers reach this limit, they will be merged into a file.
The directory to which the algorithm jar is uploaded.
k8s.kube_config
~/.kube/config
The path of k8s config file.
k8s.log4j_xml_path
The log4j.xml path for computer job.
k8s.namespace
hugegraph-computer-system
The namespace of hugegraph-computer system.
k8s.pull_secret_names
[]
The names of pull-secret for pulling image.
5 - API
5.1 - HugeGraph RESTful API
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
Besides the docs below, you can also use swagger-ui to visit the RESTful API at localhost:8080/swagger-ui/index.html. Here is an example.
5.1.1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
e.g: GET http://localhost:8080/graphs/hugegraph/schema
Response Status
200
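For reference, a minimal sketch of calling this endpoint with Java 11’s built-in HTTP client, assuming a local server and the default graph name hugegraph:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SchemaApiExample {
    public static void main(String[] args) throws Exception {
        // GET /graphs/{graph_name}/schema returns the PropertyKeys, VertexLabels,
        // EdgeLabels and IndexLabels of the graph as one JSON document
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/graphs/hugegraph/schema"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect 200
        System.out.println(response.body());       // the schema JSON
    }
}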
diff --git a/docs/clients/_print/index.html b/docs/clients/_print/index.html
index 67625c601..1aecd001a 100644
--- a/docs/clients/_print/index.html
+++ b/docs/clients/_print/index.html
@@ -1,6 +1,6 @@
API | HugeGraph
This is the multi-page printable view of this section.
-Click here to print.
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
1.1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
Expect the doc below, you can also use swagger-ui to visit the RESTful API by localhost:8080/swagger-ui/index.html. Here is an example
1.1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
e.g: GET http://localhost:8080/graphs/hugegraph/schema
Response Status
200
diff --git a/docs/clients/index.xml b/docs/clients/index.xml
index 2d8607e27..6378ae9a3 100644
--- a/docs/clients/index.xml
+++ b/docs/clients/index.xml
@@ -1,5 +1,6 @@
HugeGraph – API/docs/clients/Recent content in API on HugeGraphHugo -- gohugo.ioDocs: HugeGraph RESTful API/docs/clients/restful-api/Mon, 01 Jan 0001 00:00:00 +0000/docs/clients/restful-api/
-<p>HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.</p>Docs: HugeGraph Java Client/docs/clients/hugegraph-client/Mon, 01 Jan 0001 00:00:00 +0000/docs/clients/hugegraph-client/
+<p>HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.</p>
+<p>Expect the doc below, you can also use <code>swagger-ui</code> to visit the <code>RESTful API</code> by <code>localhost:8080/swagger-ui/index.html</code>. <a href="/docs/quickstart/hugegraph-server#swaggerui-example">Here is an example</a></p>Docs: HugeGraph Java Client/docs/clients/hugegraph-client/Mon, 01 Jan 0001 00:00:00 +0000/docs/clients/hugegraph-client/
<p>The code in this document is written in <code>java</code>, but its style is very similar to <code>gremlin(groovy)</code>. The user only needs to replace the variable declaration in the code with <code>def</code> or remove it directly,
You can convert <code>java</code> code into <code>groovy</code>; in addition, each line of statement can be without a semicolon at the end, <code>groovy</code> considers a line to be a statement.
The <code>gremlin(groovy)</code> written by the user in <code>HugeGraph-Studio</code> can refer to the <code>java</code> code in this document, and some examples will be given below.</p>
diff --git a/docs/clients/restful-api/_print/index.html b/docs/clients/restful-api/_print/index.html
index ee787f6cb..6ac323fa5 100644
--- a/docs/clients/restful-api/_print/index.html
+++ b/docs/clients/restful-api/_print/index.html
@@ -1,6 +1,6 @@
HugeGraph RESTful API | HugeGraph
This is the multi-page printable view of this section.
-Click here to print.
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
Expect the doc below, you can also use swagger-ui to visit the RESTful API by localhost:8080/swagger-ui/index.html. Here is an example
1 - Schema API
1.1 Schema
HugeGraph provides a single interface to get all Schema information of a graph, including: PropertyKey, VertexLabel, EdgeLabel and IndexLabel.
Method & Url
GET http://localhost:8080/graphs/{graph_name}/schema
e.g: GET http://localhost:8080/graphs/hugegraph/schema
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
HugeGraph-Server provides interfaces for clients to operate on graphs based on the HTTP protocol through the HugeGraph-API. These interfaces primarily include the ability to add, delete, modify, and query metadata and graph data, perform traversal algorithms, handle variables, and perform other graph-related operations.
Expect the doc below, you can also use swagger-ui to visit the RESTful API by localhost:8080/swagger-ui/index.html. Here is an example
$ ./bin/start-hugegraph.sh
Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:18080/graphs)...OK
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]
Check out created graphs:
curl http://127.0.0.1:8080/graphs/
diff --git a/docs/config/config-guide/index.html b/docs/config/config-guide/index.html
index e615d751d..7119dafae 100644
--- a/docs/config/config-guide/index.html
+++ b/docs/config/config-guide/index.html
@@ -2,10 +2,10 @@
The directory for the configuration files is hugegraph-release/conf, and all the configurations related to the service and the graph itself …">
@@ -260,7 +260,7 @@
$ ./bin/start-hugegraph.sh
Starting HugeGraphServer...
-Connecting to HugeGraphServer (http://127.0.0.1:18080/graphs)...OK
+Connecting to HugeGraphServer (http://127.0.0.1:8080/graphs)...OK
Started [pid 21614]
diff --git a/docs/config/index.xml b/docs/config/index.xml
index c02090adf..3ee1d1701 100644
--- a/docs/config/index.xml
+++ b/docs/config/index.xml
@@ -302,7 +302,7 @@
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>$ ./bin/start-hugegraph.sh
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>Starting HugeGraphServer...
-</span></span><span style="display:flex;"><span>Connecting to HugeGraphServer <span style="color:#ce5c00;font-weight:bold">(</span>http://127.0.0.1:18080/graphs<span style="color:#ce5c00;font-weight:bold">)</span>...OK
+</span></span><span style="display:flex;"><span>Connecting to HugeGraphServer <span style="color:#ce5c00;font-weight:bold">(</span>http://127.0.0.1:8080/graphs<span style="color:#ce5c00;font-weight:bold">)</span>...OK
</span></span><span style="display:flex;"><span>Started <span style="color:#ce5c00;font-weight:bold">[</span>pid 21614<span style="color:#ce5c00;font-weight:bold">]</span>
</span></span></code></pre></div><p>Check out created graphs:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>curl http://127.0.0.1:8080/graphs/
diff --git "a/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png" "b/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png"
new file mode 100644
index 000000000..87a154818
Binary files /dev/null and "b/docs/images/images-server/621swaggerui\347\244\272\344\276\213.png" differ
diff --git a/docs/index.xml b/docs/index.xml
index d74c359d8..8087b391d 100644
--- a/docs/index.xml
+++ b/docs/index.xml
@@ -332,7 +332,7 @@
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>$ ./bin/start-hugegraph.sh
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>Starting HugeGraphServer...
-</span></span><span style="display:flex;"><span>Connecting to HugeGraphServer <span style="color:#ce5c00;font-weight:bold">(</span>http://127.0.0.1:18080/graphs<span style="color:#ce5c00;font-weight:bold">)</span>...OK
+</span></span><span style="display:flex;"><span>Connecting to HugeGraphServer <span style="color:#ce5c00;font-weight:bold">(</span>http://127.0.0.1:8080/graphs<span style="color:#ce5c00;font-weight:bold">)</span>...OK
</span></span><span style="display:flex;"><span>Started <span style="color:#ce5c00;font-weight:bold">[</span>pid 21614<span style="color:#ce5c00;font-weight:bold">]</span>
</span></span></code></pre></div><p>Check out created graphs:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span>curl http://127.0.0.1:8080/graphs/
@@ -1441,17 +1441,18 @@
<p>Optional:</p>
<ol>
<li>use <code>docker exec -it graph bash</code> to enter the container to do some operations.</li>
-<li>use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph</code> to start with a <strong>built-in</strong> example graph.</li>
+<li>use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph</code> to start with a <strong>built-in</strong> example graph. We can use <code>RESTful API</code> to verify the result. The detailed step can refer to <a href="http://127.0.0.1:1313/docs/quickstart/hugegraph-server/#511-create-example-graph-when-starting-server">5.1.1</a></li>
</ol>
-<p>Also, we can use <code>docker-compose</code> to deploy, with <code>docker-compose up -d</code>. Here is an example <code>docker-compose.yml</code>:</p>
+<p>Also, if we want to manage the other Hugegraph related instances in one file, we can use <code>docker-compose</code> to deploy, with the command <code>docker-compose up -d</code> (you can config only <code>server</code>). Here is an example <code>docker-compose.yml</code>:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#204a87;font-weight:bold">version</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#4e9a06">'3'</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"></span><span style="color:#204a87;font-weight:bold">services</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">graph</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline">
-</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic">#environment:</span><span style="color:#f8f8f8;text-decoration:underline">
+</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># environment:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># - PRELOAD=true</span><span style="color:#f8f8f8;text-decoration:underline">
+</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#8f5902;font-style:italic"># PRELOAD is a option to preload a build-in sample graph when initializing.</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
-</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
+</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span></code></pre></div><h4 id="32-download-the-binary-tar-tarball">3.2 Download the binary tar tarball</h4>
<p>You could download the binary tarball from the download page of ASF site like this:</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#8f5902;font-style:italic"># use the latest version, here is 1.0.0 for example</span>
@@ -1530,11 +1531,11 @@ for detailed configuration introduction, please refer to <a href="/docs/confi
<ol>
<li>
<p>Use <code>docker run</code></p>
-<p>Use <code>docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest</code></p>
+<p>Use <code>docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest</code></p>
</li>
<li>
<p>Use <code>docker-compose</code></p>
-<p>Create <code>docker-compose.yml</code> as following</p>
+<p>Create <code>docker-compose.yml</code> as following. We should set the environment variable <code>PRELOAD=true</code>. <a href="https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy"><code>example.groovy</code></a> is a predefined script to preload the sample data. If needed, we can mount a new <code>example.groovy</code> to change the preload data.</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-yaml" data-lang="yaml"><span style="display:flex;"><span><span style="color:#204a87;font-weight:bold">version</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#4e9a06">'3'</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">services</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">graph</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
@@ -1543,7 +1544,7 @@ for detailed configuration introduction, please refer to <a href="/docs/confi
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">environment</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#000">PRELOAD=true</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
-</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
+</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span></code></pre></div><p>Use <code>docker-compose up -d</code> to start the container</p>
</li>
</ol>
@@ -1585,7 +1586,7 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span>serializer=binary
</span></span><span style="display:flex;"><span>rocksdb.data_path=.
</span></span><span style="display:flex;"><span>rocksdb.wal_path=.
-</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p>
+</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span>bin/init-store.sh
</span></span></code></pre></div><p>Start server</p>
@@ -1613,7 +1614,7 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span>
</span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy
</span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3
-</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p>
+</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span>bin/init-store.sh
</span></span><span style="display:flex;"><span>Initing HugeGraph Store...
@@ -1659,7 +1660,7 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span>#cassandra.keyspace.strategy=SimpleStrategy
</span></span><span style="display:flex;"><span>#cassandra.keyspace.replication=3
</span></span></code></pre></div><p>Since the scylladb database itself is an “optimized version” based on cassandra, if the user does not have scylladb installed, they can also use cassandra as the backend storage directly. They only need to change the backend and serializer to scylladb, and the host and post point to the seeds and port of the cassandra cluster. Yes, but it is not recommended to do so, it will not take advantage of scylladb itself.</p>
-<p>Initialize the database (required only on first startup)</p>
+<p>Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span>bin/init-store.sh
</span></span></code></pre></div><p>Start server</p>
@@ -1685,7 +1686,7 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span>#hbase.enable_partition=true
</span></span><span style="display:flex;"><span>#hbase.vertex_partitions=10
</span></span><span style="display:flex;"><span>#hbase.edge_partitions=30
-</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p>
+</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span>bin/init-store.sh
</span></span></code></pre></div><p>Start server</p>
@@ -1717,7 +1718,7 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span>jdbc.reconnect_max_times=3
</span></span><span style="display:flex;"><span>jdbc.reconnect_interval=3
</span></span><span style="display:flex;"><span>jdbc.ssl_mode=false
-</span></span></code></pre></div><p>Initialize the database (required only on first startup)</p>
+</span></span></code></pre></div><p>Initialize the database (required on first startup or a new configuration was manually added under ‘conf/graphs/’)</p>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#204a87">cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span>bin/init-store.sh
</span></span></code></pre></div><p>Start server</p>
@@ -1817,7 +1818,12 @@ after the service is stopped artificially, or when the service needs to be start
</span></span><span style="display:flex;"><span> ...
</span></span><span style="display:flex;"><span> ]
</span></span><span style="display:flex;"><span>}
-</span></span></code></pre></div><p>For detailed API, please refer to <a href="/docs/clients/restful-api">RESTful-API</a></p>
+</span></span></code></pre></div><p id="swaggerui-example"></p>
+<p>For detailed API, please refer to <a href="/docs/clients/restful-api">RESTful-API</a></p>
+<p>You can also visit <code>localhost:8080/swagger-ui/index.html</code> to check the API.</p>
+<div style="text-align: center;">
+<img src="/docs/images/images-server/621swaggerui示例.png" alt="image">
+</div>
<h3 id="7-stop-server">7 Stop Server</h3>
<div class="highlight"><pre tabindex="0" style="background-color:#f8f8f8;-moz-tab-size:4;-o-tab-size:4;tab-size:4;"><code class="language-bash" data-lang="bash"><span style="display:flex;"><span><span style="color:#000">$cd</span> *hugegraph-<span style="color:#4e9a06">${</span><span style="color:#000">version</span><span style="color:#4e9a06">}</span>
</span></span><span style="display:flex;"><span><span style="color:#000">$bin</span>/stop-hugegraph.sh
@@ -7342,7 +7348,8 @@ target directory. Copy the Jar package to the <code>plugins</code> directo
</ul>
<h4 id="21-use-docker-recommended">2.1 Use docker (recommended)</h4>
<blockquote>
-<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. This will refer to the <code>hubble</code> container internally rather than the host machine, resulting in a connection failure to the server. If <code>hubble</code> and <code>server</code> is in the same docker network, you can use the <code>container_name</code> as the hostname, and <code>8080</code> as the port. Or you can use the ip of the host as the hostname, and the port is configured by the host for the server.</p>
+<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker, and <code>hubble</code> and the server are on the same host. When configuring the hostname for the graph on the Hubble web page, please do not directly set it to <code>localhost/127.0.0.1</code>. This will refer to the <code>hubble</code> container internally rather than the host machine, resulting in a connection failure to the server.</p>
+<p>If <code>hubble</code> and <code>server</code> is in the same docker network, we <strong>recommend</strong> using the <code>container_name</code> (in our example, it is <code>graph</code>) as the hostname, and <code>8080</code> as the port. Or you can use the <strong>host IP</strong> as the hostname, and the port is configured by the host for the server.</p>
</blockquote>
<p>We can use <code>docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble</code> to quick start <a href="https://hub.docker.com/r/hugegraph/hubble">hubble</a>.</p>
<p>Alternatively, you can use Docker Compose to start <code>hubble</code>. Additionally, if <code>hubble</code> and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.</p>
@@ -7353,7 +7360,7 @@ target directory. Copy the Jar package to the <code>plugins</code> directo
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hugegraph</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">container_name</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">graph</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">ports</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
-</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">18080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
+</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span>- <span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#000;font-weight:bold">:</span><span style="color:#0000cf;font-weight:bold">8080</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">hubble</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline">
</span></span></span><span style="display:flex;"><span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#204a87;font-weight:bold">image</span><span style="color:#000;font-weight:bold">:</span><span style="color:#f8f8f8;text-decoration:underline"> </span><span style="color:#000">hugegraph/hubble</span><span style="color:#f8f8f8;text-decoration:underline">
@@ -7408,10 +7415,13 @@ target directory. Copy the Jar package to the <code>plugins</code> directo
<div style="text-align: center;">
<img src="/docs/images/images-hubble/311图创建.png" alt="image">
</div>
-<p>Create graph by filling in the content as follows::</p>
+<p>Create graph by filling in the content as follows:</p>
<center>
<img src="/docs/images/images-hubble/311图创建2.png" alt="image">
</center>
+<blockquote>
+<p><strong>Special Note</strong>: If you are starting <code>hubble</code> with Docker and <code>hubble</code> and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, please do not set it directly to <code>localhost/127.0.0.1</code>. If <code>hubble</code> and <code>server</code> are in the same docker network, we <strong>recommend</strong> using the <code>container_name</code> (in our example, it is <code>graph</code>) as the hostname, and <code>8080</code> as the port. Or you can use the <strong>host IP</strong> as the hostname, with the port being the one the host maps to the server.</p>
+</blockquote>
<h5 id="412graph-access">4.1.2 Graph Access</h5>
<p>Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.</p>
<center>
@@ -7501,7 +7511,7 @@ target directory. Copy the Jar package to the plugins directo
<center>
<img src="/docs/images/images-hubble/3241边创建.png" alt="image">
</center>
-<p>Graph mode:</p>
+<p>Graph mode:</p>
<center>
<img src="/docs/images/images-hubble/3241边创建2.png" alt="image">
</center>
@@ -7518,6 +7528,9 @@ target directory. Copy the Jar package to the plugins directo
<h5 id="425-index-types">4.2.5 Index Types</h5>
<p>Displays vertex and edge indices for vertex types and edge types.</p>
<h4 id="43-data-import">4.3 Data Import</h4>
+<blockquote>
+<p><strong>Note</strong>: currently, we recommend using <a href="/en/docs/quickstart/hugegraph-loader">hugegraph-loader</a> to import data formally. The built-in import of <code>hubble</code> is intended for <strong>testing</strong> and <strong>getting started</strong>.</p>
+</blockquote>
<p>The usage process of data import is as follows:</p>
<center>
<img src="/docs/images/images-hubble/33导入流程.png" alt="image">
diff --git a/docs/quickstart/_print/index.html b/docs/quickstart/_print/index.html
index 078ad711a..275b50155 100644
--- a/docs/quickstart/_print/index.html
+++ b/docs/quickstart/_print/index.html
@@ -1,13 +1,14 @@
Quick Start | HugeGraph
This is the multi-page printable view of this section.
-Click here to print.
HugeGraph-Server is the core part of the HugeGraph Project, which contains submodules such as Core, Backend, and API.
The Core Module is an implementation of the Tinkerpop interface; The Backend module is used to save the graph data to the data store, currently supported backends include: Memory, Cassandra, ScyllaDB, RocksDB; The API Module provides HTTP Server, which converts Client’s HTTP request into a call to Core Module.
There will be two spellings HugeGraph-Server and HugeGraphServer in the document, and other modules are similar. There is no big difference in the meaning of these two ways of writing, which can be distinguished as follows: HugeGraph-Server represents the code of server-related components, HugeGraphServer represents the service process.
2 Dependency for Building/Running
2.1 Install Java 11 (JDK 11)
Consider using Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now), and configure it yourself.
Be sure to execute the java -version command to check the jdk version before proceeding.
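For example, a quick sanity check (the version string shown is illustrative; any JDK 11 build works):
java -version
# expected to print something like: openjdk version "11.0.x" ...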
3 Deploy
There are four ways to deploy HugeGraph-Server components:
We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start an inner HugeGraph server with RocksDB in background.
Optional:
use docker exec -it graph bash to enter the container to do some operations.
use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result; the detailed steps are described in 5.1.1
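As a quick illustration (assuming the server is published on host port 8080 and the built-in example graph is named hugegraph), the preloaded data can be checked via the RESTful API:
# list the graphs managed by the server
curl http://localhost:8080/apis/graphs
# peek at a few preloaded vertices
curl "http://localhost:8080/apis/graphs/hugegraph/graph/vertices?limit=3"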
Also, if we want to manage the other Hugegraph related instances in one file, we can use docker-compose to deploy, with the command docker-compose up -d (configuring only the server is also fine). Here is an example docker-compose.yml:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
-    #environment:
+    # environment:
+    #   - PRELOAD=true
+    # PRELOAD is an option to preload a built-in sample graph when initializing.
    ports:
-      - 18080:8080
+      - 8080:8080
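For reference, a minimal sketch of the assembled docker-compose.yml after this change (values mirror the snippet above; adjust them to your environment):
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph
    # environment:
    #   - PRELOAD=true
    # PRELOAD is an option to preload a built-in sample graph when initializing.
    ports:
      - 8080:8080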
3.2 Download the binary tar tarball
You could download the binary tarball from the download page of ASF site like this:
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
@@ -56,7 +57,7 @@
cd *hugegraph*/*tool*
note: ${version} is the version number; the latest version can be found on the Download Page, or click the link there to download directly.
The general entry script for HugeGraph-Tools is bin/hugegraph. Users can use the help command to view its usage; here only the commands for one-click deployment are introduced.
{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed; users can check the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download URL is used if it is not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.
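For instance (a sketch of the one-click deployment; substitute your own version and install path):
# view the usage of all commands
bin/hugegraph help
# download and deploy HugeGraphServer and HugeGraphStudio version 0.6 into ./services
bin/hugegraph deploy -v 0.6 -p services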
4 Config
If you need to quickly start HugeGraph just for testing, then you only need to modify a few configuration items (see next section).
-for detailed configuration introduction, please refer to configuration document and introduction to configuration items
5 Startup
5.1 Use Docker to startup
In 3.1 Use Docker container, we have introduced how to use docker to deploy hugegraph-server. server can also preload an example graph by setting the parameter.
5.1.1 Create example graph when starting server
Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.
Use docker run
-Use docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
+Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
Use docker-compose
Create docker-compose.yml as follows. We should set the environment variable PRELOAD=true. example.groovy is a predefined script to preload the sample data. If needed, we can mount a new example.groovy to change the preloaded data.
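A minimal sketch of such a file (the container-side script path is an assumption based on the image layout; verify it against the image documentation):
version: '3'
services:
  graph:
    image: hugegraph/hugegraph:latest
    container_name: graph
    environment:
      - PRELOAD=true
    volumes:
      # mount a custom example.groovy to change the preloaded data
      - /path/to/your/example.groovy:/hugegraph/scripts/example.groovy
    ports:
      - 8080:8080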
Since the scylladb database itself is an “optimized version” based on cassandra, users who do not have scylladb installed can also use cassandra as the backend storage directly: just change the backend and serializer to scylladb, and point the host and port to the seeds and port of the cassandra cluster. However, this is not recommended, because it will not take advantage of scylladb’s own optimizations.
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or when a new configuration was manually added under ‘conf/graphs/’)
cd *hugegraph-${version}
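As a hedged sketch of that cassandra-backed setup (property names follow the hugegraph.properties conventions; the seed hosts are placeholders):
# conf/graphs/hugegraph.properties (conf/hugegraph.properties in older versions)
backend=scylladb
serializer=scylladb
# point to the seeds and port of the cassandra cluster
cassandra.host=cassandra-seed-1,cassandra-seed-2
cassandra.port=9042
Then initialize and start as usual:
bin/init-store.sh
bin/start-hugegraph.sh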
HugeGraph-Loader is the data import component of HugeGraph, which can convert data from various data sources into graph vertices and edges and import them into the graph database in batches.
Currently supported data sources include:
Local disk file or directory, supports TEXT, CSV and JSON format files, supports compressed files
HDFS file or directory, supports compressed files
Mainstream relational databases, such as MySQL, PostgreSQL, Oracle, SQL Server
Local disk files and HDFS files support resumable uploads.
It will be explained in detail below.
Note: HugeGraph-Loader requires the HugeGraph Server service; please refer to HugeGraph-Server Quick Start to download and start the Server.
2 Get HugeGraph-Loader
There are two ways to get HugeGraph-Loader:
Download the compiled tarball
Clone source code then compile and install
2.1 Download the compiled archive
Download the latest version of the HugeGraph-Toolchain release package:
3 - HugeGraph-Hubble Quick Start
1 HugeGraph-Hubble Overview
HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecology such as export, backup, and recovery, effectively solving the storage, query, and correlation analysis needs of massive graph data. HugeGraph is widely used in the fields of risk control, insurance claims, recommendation search, public security crime crackdown, knowledge graph, network security, and IT operation and maintenance of banks and securities companies, and is committed to allowing more industries, organizations, and users to enjoy the broader comprehensive value of data.
HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a whole-process wizard for graph applications. It is designed to improve usage fluency, lower the barrier to entry, and provide a more efficient and easy-to-use experience.
The platform mainly includes the following modules:
Graph Management
The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.
Metadata Modeling
The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.
Data Import
Data import converts the user’s business data into the vertices and edges of the graph and inserts them into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel running of multiple import tasks are realized, improving import performance. After entering an import task, you only need to follow the platform’s step prompts, upload files as needed, and fill in the content to easily complete the import of graph data. At the same time, it supports resumable uploads, an error retry mechanism, etc., which reduces import costs and improves efficiency.
Graph Analysis
By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, with functions such as customized multidimensional path queries of vertices. Three kinds of result display are provided, including graph form, table form, and Json form, with multidimensional display of the data to meet the needs of various usage scenarios. It provides functions such as run history and a favorites list of common statements, making graph operations traceable and query input reusable and shareable, which is fast and efficient. It supports exporting graph data in Json format.
Task Management
For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.
2 Deploy
There are three ways to deploy hugegraph-hubble
Use Docker (recommended)
Download the Toolchain binary package
Source code compilation
2.1 Use docker (recommended)
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, please do not set it directly to localhost/127.0.0.1. This would refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server.
If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port being the one the host maps to the server.
We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.
Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.
Use docker-compose up -d; the docker-compose.yml is as follows:
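A minimal sketch illustrating the note above (both services join the compose file’s default network, so hubble can reach the server at http://graph:8080; names follow our example):
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - 8080:8080
  hubble:
    image: hugegraph/hubble
    container_name: hubble
    ports:
      - 8088:8088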
The module usage process of the platform is as follows:
4 Platform Instructions
4.1 Graph Management
4.1.1 Graph creation
Under the graph management module, click [Create graph], and connect to multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password.
Create graph by filling in the content as follows:
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, please do not set it directly to localhost/127.0.0.1. If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port being the one the host maps to the server.
4.1.2 Graph Access
Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
4.1.3 Graph management
Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
Search range: You can search for the graph name and ID.
4.2 Metadata Modeling (list + graph mode)
4.2.1 Module entry
Left navigation:
4.2.2 Property type
4.2.2.1 Create type
Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
Created attributes can be used as attributes of vertex type and edge type.
List mode:
Graph mode:
4.2.2.2 Reuse
The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.
Select reuse items:
Check reuse items:
4.2.2.3 Management
You can delete a single item or delete it in batches in the attribute list.
4.2.3 Vertex type
4.2.3.1 Create type
Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the vertex type.
List mode:
Graph mode:
4.2.3.2 Reuse
Reusing a vertex type will also reuse the attributes and attribute indexes associated with that type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.3.3 Administration
Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.
You can delete a single item or delete it in batches.
4.2.4 Edge Types
4.2.4.1 Create
Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the edge type.
List mode:
Graph mode:
4.2.4.2 Reuse
The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.4.3 Administration
Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
You can delete a single item or delete it in batches.
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
Note: currently, we recommend using hugegraph-loader to import data formally. The built-in import of hubble is intended for testing and getting started.
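For reference, a typical loader invocation looks like the following sketch (file paths are placeholders; see the hugegraph-loader page for the authoritative options):
sh bin/hugegraph-loader.sh -g hugegraph -f ./struct.json -s ./schema.groovy -h 127.0.0.1 -p 8080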
The usage process of data import is as follows:
4.3.1 Module entrance
Left navigation:
4.3.2 Create task
Fill in the task name and remarks (optional) to create an import task.
Multiple import tasks can be created and imported in parallel.
4.3.3 Uploading files
Upload the files used to build the graph. The currently supported format is CSV, and more formats will be supported in the future.
Multiple files can be uploaded at the same time.
4.3.4 Setting up data mapping
Set up data mapping for uploaded files, including file settings and type settings.
File settings: check or fill in whether the file includes a header, the separator, the encoding format, and other settings of the file itself. All items have default values and do not need to be filled in manually.
Type setting:
Vertex map and edge map:
【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;
【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;
Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.
After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.
Fill in the settings map:
Mapping list:
4.3.5 Import data
Before importing, you need to fill in the import setting parameters. After filling them in, you can start importing data into the graph.
Import settings
The import setting parameters are as shown in the figure below. All have default values and do not need to be filled in manually.
Import details
Click Start Import to start the file import task.
The import details provide the mapping type, import speed, import progress, time consumed, and the specific status of the current task for each uploaded file, and each task can be paused, resumed, stopped, etc.
If the import fails, you can view the specific reason.
4.4 Data Analysis
4.4.1 Module entry
Left navigation:
4.4.2 Multi-graph switching
Using the switcher on the left, you can flexibly switch between the operation spaces of multiple graphs.
4.4.3 Graph Analysis and Processing
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
Support zoom, center, full screen, export and other operations.
【Graph Mode】
【Table Mode】
【Json Mode】
4.4.4 Data Details
Click a vertex/edge entity to view the data details of the vertex/edge, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the information display dimensions of the graph and improving usability.
4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
Right-click a vertex to bring up the vertex’s menu, from which it can be expanded, queried, hidden, etc.
Expand: Click to display the vertices associated with the selected point.
Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
Hide: When clicked, hides the selected point and its associated edges.
Double-clicking a vertex also displays the vertex associated with the selected point.
4.4.6 Add vertex/edge
4.4.6.1 Add vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
Click on the graph area panel, the Add Vertex entry appears
Click the first icon in the action bar in the upper right corner
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
4.4.7 Execute the query of records and favorites
Each query is recorded at the bottom of the graph area, including query time, execution type, content, status, and time consumed, as well as [favorite] and [load] operations, providing a complete and traceable record of graph execution whose content can be quickly loaded and reused.
A favorites function is provided for collecting frequently used statements, making it convenient to call high-frequency statements quickly.
4.5 Task Management
4.5.1 Module entry
Left navigation:
4.5.2 Task Management
Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
gremlin: Gremlin tasks
algorithm: OLAP algorithm task
remove_schema: remove metadata
rebuild_index: rebuild the index
The list displays the asynchronous task information of the current graph, including task ID, task name, task type, creation time, time consumed, status, and operations, realizing the management of asynchronous tasks.
Support filtering by task type and status
Support searching for task ID and task name
Asynchronous tasks can be deleted or deleted in batches
4.5.3 Gremlin asynchronous tasks
Create a task
The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to Gremlin task, an asynchronous task will be created in the asynchronous task center after clicking execute.
Task submission
After the task is submitted successfully, the graph area returns the submission result and task ID
Task details
A [View] entry is provided; you can jump to the task details to view the specific execution of the current task. After jumping to the task center, the row of the currently executing task is displayed directly.
Click to view the entry to jump to the task management list, as follows:
View the results
The results are displayed in the form of json
4.5.4 OLAP algorithm tasks
There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
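As a hedged example (assuming the default graph name hugegraph and the task ID returned on submission), progress can be checked via the task RESTful API:
# list asynchronous tasks of the graph
curl http://localhost:8080/apis/graphs/hugegraph/tasks
# view a specific task (here ID 2) to see its progress and result
curl http://localhost:8080/apis/graphs/hugegraph/tasks/2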
4.5.5 Delete metadata, rebuild index
Create a task
In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created
When editing an existing vertex/edge type and adding an index, an asynchronous task for creating the index can be created.
Task details
After confirming/saving, you can jump to the task center to view the details of the current task
4 - HugeGraph-Client Quick Start
1 Overview Of Hugegraph
HugeGraph-Client sends HTTP requests to HugeGraph-Server to obtain and parse the Server’s execution results. Currently only the HugeGraph-Client for Java is provided. You can use HugeGraph-Client to write Java code to operate HugeGraph, such as adding, deleting, modifying, and querying schema and graph data, or executing gremlin statements.
2 What You Need
Java 11 (also support Java 8)
Maven 3.5+
3 How To Use
The basic steps to use HugeGraph-Client are as follows:
Build a new Maven project by IDEA or Eclipse
Add the HugeGraph-Client dependency in the pom file
Create an object to invoke the interface of HugeGraph-Client
See the complete example in the following section for details.
<dependencies>
    <dependency>
        <groupId>org.apache.hugegraph</groupId>
        <artifactId>hugegraph-client</artifactId>
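Building on the dependency above, a minimal hedged sketch of client usage (the server URL and graph name are placeholders; the builder-style API reflects recent hugegraph-client releases):
import org.apache.hugegraph.driver.HugeClient;
import org.apache.hugegraph.driver.SchemaManager;

public class ClientExample {
    public static void main(String[] args) {
        // connect to the server and operate on the "hugegraph" graph
        HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph").build();
        // create a property key as a simple smoke test
        SchemaManager schema = client.schema();
        schema.propertyKey("name").asText().ifNotExist().create();
        client.close();
    }
}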
diff --git a/docs/quickstart/hugegraph-hubble/index.html b/docs/quickstart/hugegraph-hubble/index.html
index 048b13a0b..f457184be 100644
--- a/docs/quickstart/hugegraph-hubble/index.html
+++ b/docs/quickstart/hugegraph-hubble/index.html
@@ -1,17 +1,17 @@
HugeGraph-Hubble Quick Start | HugeGraph
+HugeGraph is an analysis-oriented graph database system that supports batch operations, which fully supports Apache …">
HugeGraph-Hubble Quick Start
1 HugeGraph-Hubble Overview
HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecology such as export, backup, and recovery, effectively solving the storage, query, and correlation analysis needs of massive graph data. HugeGraph is widely used in the fields of risk control, insurance claims, recommendation search, public security crime crackdown, knowledge graph, network security, and IT operation and maintenance of banks and securities companies, and is committed to allowing more industries, organizations, and users to enjoy the broader comprehensive value of data.
HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. The platform covers the whole process from data modeling, to efficient data import, to real-time and offline analysis of data, and unified management of graphs, realizing a whole-process wizard for graph applications. It is designed to improve usage fluency, lower the barrier to entry, and provide a more efficient and easy-to-use experience.
The platform mainly includes the following modules:
Graph Management
The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.
Metadata Modeling
The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.
Data Import
Data import converts the user’s business data into the vertices and edges of the graph and inserts them into the graph database. The platform provides a wizard-style visual import module. By creating import tasks, the management of import tasks and the parallel running of multiple import tasks are realized, improving import performance. After entering an import task, you only need to follow the platform’s step prompts, upload files as needed, and fill in the content to easily complete the import of graph data. At the same time, it supports resumable uploads, an error retry mechanism, etc., which reduces import costs and improves efficiency.
Graph Analysis
By inputting the graph traversal language Gremlin, high-performance general analysis of graph data can be realized, with functions such as customized multidimensional path queries of vertices. Three kinds of result display are provided, including graph form, table form, and Json form, with multidimensional display of the data to meet the needs of various usage scenarios. It provides functions such as run history and a favorites list of common statements, making graph operations traceable and query input reusable and shareable, which is fast and efficient. It supports exporting graph data in Json format.
Task Management
For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.
2 Deploy
There are three ways to deploy hugegraph-hubble
Use Docker (recommended)
Download the Toolchain binary package
Source code compilation
2.1 Use docker (recommended)
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, please do not set it directly to localhost/127.0.0.1. This would refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server.
If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, with the port being the one the host maps to the server.
We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quickly start hubble.
Alternatively, you can use Docker Compose to start hubble. Additionally, if hubble and the graph are in the same Docker network, you can access the graph using the container name of the graph, eliminating the need for the host machine’s IP address.
Use docker-compose up -d; the docker-compose.yml is as follows:
The module usage process of the platform is as follows:
4 Platform Instructions
4.1 Graph Management
4.1.1 Graph creation
Under the graph management module, click [Create graph], and connect to multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password.
Create graph by filling in the content as follows:
4.1.2 Graph Access
Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.
4.1.3 Graph management
Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
Search range: You can search for the graph name and ID.
4.2 Metadata Modeling (list + graph mode)
4.2.1 Module entry
Left navigation:
4.2.2 Property type
4.2.2.1 Create type
Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
Created attributes can be used as attributes of vertex type and edge type.
List mode:
Graph mode:
4.2.2.2 Reuse
The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.
Select reuse items:
Check reuse items:
4.2.2.3 Management
You can delete a single item or delete it in batches in the attribute list.
4.2.3 Vertex type
4.2.3.1 Create type
Fill in or select the vertex type name, ID strategy, associated attributes, primary key attributes, vertex style, the content displayed below the vertex in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the vertex type.
List mode:
Graph mode:
4.2.3.2 Reuse
Reusing a vertex type will also reuse the attributes and attribute indexes associated with that type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.3.3 Administration
Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.
You can delete a single item or delete it in batches.
4.2.4 Edge Types
4.2.4.1 Create
Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and the index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the edge type.
List mode:
Graph mode:
4.2.4.2 Reuse
The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.4.3 Administration
Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
You can delete a single item or delete it in batches.
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
The usage process of data import is as follows:
4.3.1 Module entrance
Left navigation:
4.3.2 Create task
Fill in the task name and remarks (optional) to create an import task.
Multiple import tasks can be created and imported in parallel.
4.3.3 Uploading files
Upload the files used to build the graph. The currently supported format is CSV, and more formats will be supported in the future.
Multiple files can be uploaded at the same time.
4.3.4 Setting up data mapping
Set up data mapping for uploaded files, including file settings and type settings.
File settings: check or fill in whether the file includes a header, the separator, the encoding format, and other settings of the file itself. All items have default values and do not need to be filled in manually.
Type setting:
Vertex map and edge map:
【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;
【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;
Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.
After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.
Fill in the settings map:
Mapping list:
4.3.5 Import data
Before importing, you need to fill in the import setting parameters. After filling in, you can start importing data into the gallery.
Import settings
The import setting parameter items are as shown in the figure below, all set the default value, no need to fill in manually
Import details
Click Start Import to start the file import task
The import details provide the mapping type, import speed, import progress, time-consuming and the specific status of the current task set for each uploaded file, and can pause, resume, stop and other operations for each task
If the import fails, you can view the specific reason
4.4 Data Analysis
4.4.1 Module entry
Left navigation:
4.4.2 Multi-image switching
By switching the entrance on the left, flexibly switch the operation space of multiple graphs
4.4.3 Graph Analysis and Processing
HugeGraph supports Gremlin, a graph traversal query language of Apache TinkerPop3. Gremlin is a general graph database query language. By entering Gremlin statements and clicking execute, you can perform query and analysis operations on graph data, and create and delete vertices/edges. , vertex/edge attribute modification, etc.
After Gremlin query, below is the graph result display area, which provides 3 kinds of graph result display modes: [Graph Mode], [Table Mode], [Json Mode].
Support zoom, center, full screen, export and other operations.
【Picture Mode】
【Table mode】
【Json mode】
4.4.4 Data Details
Click the vertex/edge entity to view the data details of the vertex/edge, including: vertex/edge type, vertex ID, attribute and corresponding value, expand the information display dimension of the graph, and improve the usability.
4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, in-depth customized query and hidden operations can be performed for the vertices in the query result to realize customized mining of graph results.
Right-click a vertex, and the menu entry of the vertex appears, which can be displayed, inquired, hidden, etc.
Expand: Click to display the vertices associated with the selected point.
Query: By selecting the edge type and edge direction associated with the selected point, and then selecting its attributes and corresponding filtering rules under this condition, a customized path display can be realized.
Hide: When clicked, hides the selected point and its associated edges.
Double-clicking a vertex also displays the vertex associated with the selected point.
4.4.6 Add vertex/edge
4.4.6.1 Added vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
Click on the graph area panel, the Add Vertex entry appears
Click the first icon in the action bar in the upper right corner
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
4.4.7 Execute the query of records and favorites
Record each query record at the bottom of the graph area, including: query time, execution type, content, status, time-consuming, as well as [collection] and [load] operations, to achieve a comprehensive record of graph execution, with traces to follow, and Can quickly load and reuse execution content
Provides the function of collecting sentences, which can be used to collect frequently used sentences, which is convenient for fast calling of high-frequency sentences.
4.5 Task Management
4.5.1 Module entry
Left navigation:
4.5.2 Task Management
Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
gremlin: Gremlin tasks
algorithm: OLAP algorithm task
remove_schema: remove metadata
rebuild_index: rebuild the index
The list displays the asynchronous task information of the current graph, including: task ID, task name, task type, creation time, time-consuming, status, operation, and realizes the management of asynchronous tasks.
Support filtering by task type and status
Support searching for task ID and task name
Asynchronous tasks can be deleted or deleted in batches
4.5.3 Gremlin asynchronous tasks
Create a task
The data analysis module currently supports two Gremlin operations, Gremlin query and Gremlin task; if the user switches to the Gremlin task, after clicking execute, an asynchronous task will be created in the asynchronous task center;
Task submission
After the task is submitted successfully, the graph area returns the submission result and task ID
Mission details
Provide [View] entry, you can jump to the task details to view the specific execution of the current task After jumping to the task center, the currently executing task line will be displayed directly
Click to view the entry to jump to the task management list, as follows:
View the results
The results are displayed in the form of json
4.5.4 OLAP algorithm tasks
There is no visual OLAP algorithm execution on Hubble. You can call the RESTful API to perform OLAP algorithm tasks, find the corresponding tasks by ID in the task management, and view the progress and results.
4.5.5 Delete metadata, rebuild index
Create a task
In the metadata modeling module, when deleting metadata, an asynchronous task for deleting metadata can be created
When editing an existing vertex/edge type operation, when adding an index, an asynchronous task of creating an index can be created
Task details
After confirming/saving, you can jump to the task center to view the details of the current task
The module usage process of the platform is as follows:
4 Platform Instructions
4.1 Graph Management
4.1.1 Graph creation
Under the graph management module, click [Create graph], and connect to multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password.
Create graph by filling in the content as follows:
Special Note: If you are starting hubble with Docker and hubble and the server are on the same host, then when configuring the hostname for the graph on the Hubble web page, do not set it directly to localhost/127.0.0.1. If hubble and server are in the same Docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Otherwise, you can use the host IP as the hostname, with the port that the host maps to the server.
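For reference, here is a minimal docker-compose.yml sketch for that setup, assembled from the compose fragments elsewhere in this document; the trimmed hubble service block and the reliance on compose's default shared network are assumptions of this sketch:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    container_name: graph   # use "graph" as the hostname on the Hubble web page
    ports:
      - 8080:8080

  hubble:
    image: hugegraph/hubble
    ports:
      - 8088:8088           # Hubble UI: http://localhost:8088
With this file, docker-compose places both containers on the same default network, so the server can be reached from hubble as graph:8080.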
4.1.2 Graph Access
Provides access to the graph space. After entering a graph, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis on it.
4.1.3 Graph management
Users can manage graphs in a unified way: getting an overview, searching, and editing or deleting the information of a single graph.
Search scope: you can search by graph name and ID.
4.2 Metadata Modeling (list + graph mode)
4.2.1 Module entry
Left navigation:
4.2.2 Property type
4.2.2.1 Create type
Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
Created attributes can be used as attributes of vertex type and edge type.
List mode:
Graph mode:
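The attribute (property key) creation described above can also be done through the server's RESTful schema API. A hedged sketch, assuming the server runs on localhost:8080 and the graph is named hugegraph; the field values are illustrative:
# create a property key "age" of type INT with SINGLE cardinality
curl -X POST -H "Content-Type: application/json" \
     -d '{"name": "age", "data_type": "INT", "cardinality": "SINGLE"}' \
     "http://localhost:8080/apis/graphs/hugegraph/schema/propertykeys"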
4.2.2.2 Reuse
The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
Select the ID of the graph to reuse from, then select the attributes to be reused. The platform will then check for conflicts; once the check passes, the metadata can be reused.
Select reuse items:
Check reuse items:
4.2.2.3 Management
You can delete a single item or delete it in batches in the attribute list.
4.2.3 Vertex type
4.2.3.1 Create type
Fill in or select the vertex type name, ID strategy, associated attributes, primary key attribute, vertex style, the content displayed below the vertex in query results, and the index information (whether to create a type index, and the specific content of the attribute indexes) to complete the creation of the vertex type.
List mode:
Graph mode:
4.2.3.2 Reuse
Reusing a vertex type will also reuse the attributes and attribute indexes associated with that type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.3.3 Administration
Editing operations are available: the vertex style, associated type, vertex display content, and attribute indexes can be edited; the rest cannot be edited.
You can delete a single item or delete it in batches.
4.2.4 Edge Types
4.2.4.1 Create
Fill in or select the edge type name, start point type, end point type, associated attributes, whether to allow multiple connections, edge style, the content displayed below the edge in query results, and the index information (whether to create a type index, and the specific content of the attribute indexes) to complete the creation of the edge type.
List mode:
Graph mode:
4.2.4.2 Reuse
Reusing an edge type will also reuse the start point type, end point type, associated attributes, and attribute indexes of that type.
The reuse method is similar to property reuse, see 4.2.2.2.
4.2.4.3 Administration
Editing operations are available: edge styles, associated attributes, edge display content, and attribute indexes can be edited; the rest cannot be edited, the same as for vertex types.
You can delete a single item or delete it in batches.
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
Note: currently, we recommend using hugegraph-loader for formal data import; the built-in import of hubble is intended for testing and getting started.
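As a sketch of the recommended path, a typical hugegraph-loader invocation looks like the following; the graph name, host/port, and file paths are placeholders, so check the hugegraph-loader quickstart for the exact options:
# load data described by struct.json, with schema.groovy, into graph "hugegraph"
sh bin/hugegraph-loader.sh -g hugegraph \
   -f example/file/struct.json -s example/file/schema.groovy \
   -h 127.0.0.1 -p 8080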
The usage process of data import is as follows:
4.3.1 Module entrance
Left navigation:
4.3.2 Create task
Fill in the task name and remarks (optional) to create an import task.
Multiple import tasks can be created and imported in parallel.
4.3.3 Uploading files
Upload the files from which the graph will be built. Currently only the CSV format is supported; more formats will be supported in the future.
Multiple files can be uploaded at the same time.
4.3.4 Setting up data mapping
Set up data mapping for the uploaded files, including file settings and type settings.
File settings: check or fill in whether the file contains a header, the separator, the encoding format, and other settings of the file itself. All of them have default values, so there is usually no need to fill them in manually.
Type setting:
Vertex map and edge map:
【Vertex Type】: select the vertex type, and map columns of the uploaded file to its ID;
【Edge Type】: select the edge type, and map columns of the uploaded file to the ID columns of its start point type and end point type;
Mapping settings: map columns of the uploaded file to the attributes of the selected vertex type. If an attribute name is identical to a header name in the file, the mapping is matched automatically and there is no need to select it manually.
After completing the settings, the mapping list is displayed before you proceed to the next step. Adding, editing, and deleting mappings are supported.
Fill in the settings map:
Mapping list:
4.3.5 Import data
Before importing, you need to fill in the import setting parameters. After that, you can start importing data into the graph.
Import settings
The import setting parameters are as shown in the figure below. All of them have default values, so there is no need to fill them in manually.
Import details
Click Start Import to start the file import task.
The import details show, for each uploaded file, the mapping type, import speed, import progress, elapsed time, and the specific status of the current task; each task can be paused, resumed, or stopped.
If an import fails, you can view the specific reason.
4.4 Data Analysis
4.4.1 Module entry
Left navigation:
4.4.2 Multi-graph switching
Use the switch entry on the left to flexibly switch between the operation spaces of multiple graphs.
4.4.3 Graph Analysis and Processing
HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. Gremlin is a general-purpose graph database query language: by entering Gremlin statements and clicking execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.
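For instance, a statement like the one below can be typed into the input box; the same query can also be sent to the server's RESTful API directly. This is a sketch, assuming the server listens on localhost:8080 and the graph is named hugegraph:
# run a read-only Gremlin statement via the RESTful API (GET, synchronous)
curl "http://localhost:8080/apis/gremlin?gremlin=hugegraph.traversal().V().limit(3)"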
After a Gremlin query, the graph result display area below provides 3 display modes: [Graph Mode], [Table Mode], [Json Mode].
Zoom, center, full screen, export, and other operations are supported.
【Graph Mode】
【Table mode】
【Json mode】
4.4.4 Data Details
Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values. This expands the information dimensions of the graph and improves usability.
4.4.5 Multidimensional Path Query of Graph Results
In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in the query result, enabling customized mining of graph results.
Right-click a vertex to open its menu, from which the vertex can be expanded, queried, hidden, and so on.
Expand: Click to display the vertices associated with the selected point.
Query: select the edge type and edge direction associated with the selected point, then select attributes and corresponding filtering rules under that condition, to display a customized path.
Hide: When clicked, hides the selected point and its associated edges.
Double-clicking a vertex also displays the vertices associated with the selected point.
4.4.6 Add vertex/edge
4.4.6.1 Add vertex
In the graph area, two entries can be used to dynamically add vertices, as follows:
Click the graph area panel to bring up the Add Vertex entry
Click the first icon in the action bar in the upper right corner
Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.
The entry is as follows:
Add the vertex content as follows:
4.4.6.2 Add edge
Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.
4.4.7 Execution records and favorites
Each query is recorded at the bottom of the graph area, including query time, execution type, content, status, and elapsed time, together with [favorite] and [load] operations. This gives a complete, traceable record of graph executions, and execution content can be quickly loaded and reused.
A favorites function is provided for statements, so frequently used statements can be saved and recalled quickly.
4.5 Task Management
4.5.1 Module entry
Left navigation:
4.5.2 Task Management
Provides unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
gremlin: Gremlin tasks
algorithm: OLAP algorithm task
remove_schema: remove metadata
rebuild_index: rebuild the index
The list displays the asynchronous task information of the current graph, including task ID, task name, task type, creation time, elapsed time, status, and operations, realizing unified management of asynchronous tasks.
Filtering by task type and status is supported (see the REST sketch after this list)
Searching by task ID and task name is supported
Asynchronous tasks can be deleted individually or in batches
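For reference, listing and filtering tasks over REST might look like this; the graph name, status value, and limit are illustrative:
# list up to 10 asynchronous tasks of graph "hugegraph" with status "success"
curl "http://localhost:8080/apis/graphs/hugegraph/tasks?status=success&limit=10"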
4.5.3 Gremlin asynchronous tasks
Create a task
The data analysis module currently supports two Gremlin operations: Gremlin query and Gremlin task. If the user switches to Gremlin task, then after clicking execute, an asynchronous task is created in the asynchronous task center.
Task submission
After the task is submitted successfully, the graph area returns the submission result and the task ID.
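A hedged sketch of the equivalent RESTful submission, assuming the server's asynchronous Gremlin job endpoint; the host and graph name are placeholders:
# submit a Gremlin statement as an asynchronous job; the response carries a task ID
curl -X POST -H "Content-Type: application/json" \
     -d '{"gremlin": "hugegraph.traversal().V().count()", "bindings": {}, "language": "gremlin-groovy", "aliases": {}}' \
     "http://localhost:8080/apis/graphs/hugegraph/jobs/gremlin"
# => {"task_id": 1}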
Task details
A [View] entry is provided, from which you can jump to the task details to see the specific execution of the current task. After jumping to the task center, the row of the currently executing task is displayed directly.
Click the [View] entry to jump to the task management list, as follows:
View the results
The results are displayed in JSON form.
4.5.4 OLAP algorithm tasks
Hubble has no visual OLAP algorithm execution. You can call the RESTful API to run OLAP algorithm tasks, find the corresponding task by ID in task management, and view its progress and results.
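For example, a single task can be fetched by its ID; task ID 2 and the host are illustrative:
# view the progress and result of an asynchronous task by its ID
curl "http://localhost:8080/apis/graphs/hugegraph/tasks/2"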
4.5.5 Delete metadata, rebuild index
Create a task
In the metadata modeling module, an asynchronous metadata-deletion task can be created when deleting metadata.
An asynchronous index-creation task can be created when editing an existing vertex/edge type and adding an index.
Task details
After confirming/saving, you can jump to the task center to view the details of the current task
diff --git a/docs/quickstart/hugegraph-server/index.html b/docs/quickstart/hugegraph-server/index.html
index f2fdcf60b..a21a82b45 100644
--- a/docs/quickstart/hugegraph-server/index.html
+++ b/docs/quickstart/hugegraph-server/index.html
@@ -2,9 +2,9 @@
HugeGraph-Server Quick Start
1 HugeGraph-Server Overview
HugeGraph-Server is the core part of the HugeGraph Project; it contains submodules such as Core, Backend, and API.
The Core module is an implementation of the TinkerPop interface; the Backend module is used to save the graph data to the data store, and the currently supported backends include Memory, Cassandra, ScyllaDB, and RocksDB; the API module provides the HTTP server, which converts a client's HTTP request into a call to the Core module.
Two spellings, HugeGraph-Server and HugeGraphServer, appear in this document, and other modules are named similarly. There is no big difference in meaning; they can be distinguished as follows: HugeGraph-Server refers to the code of the server-related components, while HugeGraphServer refers to the service process.
2 Dependency for Building/Running
2.1 Install Java 11 (JDK 11)
Consider using Java 11 to run HugeGraph-Server (it is also compatible with Java 8 for now), and configure it yourself.
Be sure to execute the java -version command to check the JDK version before proceeding.
3 Deploy
There are four ways to deploy HugeGraph-Server components:
3.1 Use Docker container
We can use docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph to quickly start a HugeGraph server with RocksDB as the backend, running in the background.
Optional:
use docker exec -it graph bash to enter the container to do some operations.
use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result, as shown below; the detailed steps are in 5.1.1.
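A minimal verification sketch, assuming the preloaded example graph is named hugegraph and the server listens on localhost:8080:
# list a few vertices of the example graph to confirm the preload worked
curl "http://localhost:8080/apis/graphs/hugegraph/graph/vertices?limit=3"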
Also, if we want to manage other HugeGraph-related instances in one file, we can use docker-compose for deployment, with the command docker-compose up -d (configuring only server is also fine). Here is an example docker-compose.yml:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
-    #environment:
+    # environment:
     # - PRELOAD=true
+    # PRELOAD is an option to preload a built-in sample graph when initializing.
     ports:
-      - 18080:8080
+      - 8080:8080
3.2 Download the binary tarball
You could download the binary tarball from the download page of the ASF site like this:
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
tar zxf *hugegraph*.tar.gz
@@ -68,7 +69,7 @@
cd *hugegraph*/*tool*
note: ${version} is the version; the latest version can be found on the Download page, or click the link there to download directly.
The general entry script for HugeGraph-Tools is bin/hugegraph. Users can use the help command to view its usage; here only the commands for one-click deployment are introduced.
{hugegraph-version} indicates the version of HugeGraphServer and HugeGraphStudio to be deployed; users can view the conf/version-mapping.yaml file for version information. {install-path} specifies the installation directory of HugeGraphServer and HugeGraphStudio. {download-path-prefix} is optional and specifies the download address of the HugeGraphServer and HugeGraphStudio tarballs; the default download URL is used if it is not provided. For example, to start HugeGraph-Server and HugeGraphStudio version 0.6, write the above command as bin/hugegraph deploy -v 0.6 -p services.
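Putting the parameters above together, the one-click deploy call takes roughly this shape; this is a sketch, and the -u flag name for the download prefix is an assumption (check bin/hugegraph help for the exact option):
# deploy HugeGraphServer and HugeGraphStudio in one command
bin/hugegraph deploy -v {hugegraph-version} -p {install-path} [-u {download-path-prefix}]

# the example from the text above: version 0.6, installed into ./services
bin/hugegraph deploy -v 0.6 -p services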
4 Config
If you need to quickly start HugeGraph just for testing, then you only need to modify a few configuration items (see next section).
-for detailed configuration introduction, please refer to configuration document and introduction to configuration items
5 Startup
5.1 Use Docker to startup
In 3.1 Use Docker container, we introduced how to use Docker to deploy hugegraph-server. The server can also preload an example graph by setting a parameter.
5.1.1 Create example graph when starting server
Set the environment variable PRELOAD=true when starting Docker in order to load data during the execution of the startup script.
Use docker run
-Use docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
+Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
Use docker-compose
Create docker-compose.yml as follows. We should set the environment variable PRELOAD=true. example.groovy is a predefined script to preload the sample data. If needed, we can mount a new example.groovy to change the preloaded data.
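The compose file referred to here looks roughly like the following, reassembled from the fragments later in this document; the commented volume mount path inside the container is an assumption:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
    environment:
      - PRELOAD=true
    # to change the preloaded data, mount your own example.groovy;
    # the in-container path below is an assumption:
    # volumes:
    #   - /path/to/example.groovy:/hugegraph/scripts/example.groovy
    ports:
      - 8080:8080
Use docker-compose up -d to start the container.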
Since the scylladb database itself is an "optimized version" based on cassandra, if the user does not have scylladb installed, they can also use cassandra as the backend storage directly: just change the backend and serializer to scylladb, and point the host and port to the seeds and port of the cassandra cluster. However, this is not recommended, because it will not take advantage of scylladb itself.
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
cd *hugegraph-${version}
bin/init-store.sh
diff --git a/docs/quickstart/index.xml b/docs/quickstart/index.xml
index eb7b40148..53978da73 100644
--- a/docs/quickstart/index.xml
+++ b/docs/quickstart/index.xml
@@ -24,17 +24,18 @@
Optional:
use docker exec -it graph bash to enter the container to do some operations.
-use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph.
+use docker run -itd --name=graph -p 8080:8080 -e PRELOAD="true" hugegraph/hugegraph to start with a built-in example graph. We can use the RESTful API to verify the result. The detailed steps can be found in 5.1.1
-Also, we can use docker-compose to deploy, with docker-compose up -d. Here is an example docker-compose.yml:
+Also, if we want to manage the other HugeGraph-related instances in one file, we can use docker-compose to deploy, with the command docker-compose up -d (you can config only server). Here is an example docker-compose.yml:
version: '3'
services:
  graph:
    image: hugegraph/hugegraph
-    #environment:
+    # environment:
     # - PRELOAD=true
+    # PRELOAD is an option to preload a built-in sample graph when initializing.
     ports:
-      - 18080:8080
+      - 8080:8080
3.2 Download the binary tarball
You could download the binary tarball from the download page of the ASF site like this:
# use the latest version, here is 1.0.0 for example
wget https://downloads.apache.org/incubator/hugegraph/1.0.0/apache-hugegraph-incubating-1.0.0.tar.gz
@@ -113,11 +114,11 @@
Use docker run
-Use docker run -itd --name=graph -p 18080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
+Use docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true hugegraph/hugegraph:latest
Use docker-compose
-Create docker-compose.yml as following
+Create docker-compose.yml as following. We should set the environment variable PRELOAD=true. example.groovy is a predefined script to preload the sample data. If needed, we can mount a new example.groovy to change the preload data.
@@ -126,7 +127,7 @@
    environment:
      - PRELOAD=true
    ports:
-      - 18080:8080
+      - 8080:8080
Use docker-compose up -d to start the container
@@ -168,7 +169,7 @@
serializer=binary
rocksdb.data_path=.
rocksdb.wal_path=.
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
cd *hugegraph-${version}
bin/init-store.sh
@@ -196,7 +197,7 @@
#cassandra.keyspace.strategy=SimpleStrategy
#cassandra.keyspace.replication=3
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
@@ -242,7 +243,7 @@
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
@@ -268,7 +269,7 @@
#hbase.enable_partition=true
#hbase.vertex_partitions=10
#hbase.edge_partitions=30
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
@@ -300,7 +301,7 @@
jdbc.reconnect_max_times=3
jdbc.reconnect_interval=3
jdbc.ssl_mode=false
-Initialize the database (required only on first startup)
+Initialize the database (required on first startup or a new configuration was manually added under 'conf/graphs/')
@@ -400,7 +401,12 @@
For detailed API, please refer to RESTful-API
+You can also visit localhost:8080/swagger-ui/index.html to check the API.
7 Stop Server
@@ -1462,7 +1468,8 @@
2.1 Use docker (recommended)
-Special Note: If you are starting hubble with Docker, and hubble and the server are on the same host, when configuring the hostname for the graph on the Hubble web page, please do not directly set it to localhost/127.0.0.1. This will refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server. If hubble and server are in the same docker network, you can use the container_name as the hostname, and 8080 as the port. Or you can use the IP of the host as the hostname, and the port is configured by the host for the server.
+Special Note: If you are starting hubble with Docker, and hubble and the server are on the same host, when configuring the hostname for the graph on the Hubble web page, please do not directly set it to localhost/127.0.0.1. This will refer to the hubble container internally rather than the host machine, resulting in a connection failure to the server.
+If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, and the port is configured by the host for the server.
We can use docker run -itd --name=hubble -p 8088:8088 hugegraph/hubble to quick start hubble.
@@ -1473,7 +1480,7 @@
    image: hugegraph/hugegraph
    container_name: graph
    ports:
-      - 18080:8080
+      - 8080:8080
  hubble:
    image: hugegraph/hubble
@@ -1528,10 +1535,13 @@
-Create graph by filling in the content as follows::
+Create graph by filling in the content as follows:
+Special Note: If you are starting hubble with Docker, and hubble and the server are on the same host, when configuring the hostname for the graph on the Hubble web page, please do not directly set it to localhost/127.0.0.1. If hubble and server are in the same docker network, we recommend using the container_name (in our example, it is graph) as the hostname, and 8080 as the port. Or you can use the host IP as the hostname, and the port is configured by the host for the server.
4.1.2 Graph Access
@@ -1638,6 +1648,9 @@
4.2.5 Index Types
Displays vertex and edge indices for vertex types and edge types.
4.3 Data Import
+Note: currently, we recommend using hugegraph-loader to import data formally. The built-in import of hubble is used for testing and getting started.
The usage process of data import is as follows: