From 62af8a464d3f17d18e0342774aba1357322896f3 Mon Sep 17 00:00:00 2001
From: cooper-lzy <78672629+cooper-lzy@users.noreply.github.com>
Date: Sun, 18 Feb 2024 11:00:35 +0800
Subject: [PATCH 1/3] update export tools

---
 docs-2.0-en/import-export/write-tools.md | 14 ++++++++------
 docs-2.0-zh/import-export/write-tools.md | 23 ++++++++++++++++-------
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/docs-2.0-en/import-export/write-tools.md b/docs-2.0-en/import-export/write-tools.md
index 616df169ad9..0dfbd53d911 100644
--- a/docs-2.0-en/import-export/write-tools.md
+++ b/docs-2.0-en/import-export/write-tools.md
@@ -5,19 +5,21 @@ There are many ways to write NebulaGraph {{ nebula.release }}:
 
 - Import with [the command -f](../2.quick-start/3.connect-to-nebula-graph.md): This method imports a small number of prepared nGQL files, which is suitable to prepare for a small amount of manual test data.
 
-- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple csv files of this machine. A single file cannot exceed 100 MB, and its format is limited.
-- Import with [Importer](use-importer.md): This method imports multiple csv files on a single machine with unlimited size and flexible format.
+- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple CSV files of this machine. A single file cannot exceed 100 MB, and its format is limited.
+- Import with [Importer](use-importer.md): This method imports multiple CSV files on a single machine with unlimited size and flexible format.
 - Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports from various distribution sources, such as Neo4j, Hive, MySQL, etc., which requires a Spark cluster.
-- Import with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
+- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
 - Import with [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md): This method imports in the way of writing programs, which requires certain programming and tuning skills.
 
 The following figure shows the positions of these ways:
 
 ![image](https://docs-cdn.nebula-graph.com.cn/figures/write-choice.png)
 
-
 ## Export tools
 
-!!! enterpriseonly
+- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
+- Export the data in database to a CSV file or another graph space (different NebulaGraph database clusters are supported) using the export function of the Exchange.
+
+  !!! enterpriseonly
 
-    The export tool is exclusively available in the Enterprise Edition. If you require access to this version, please [contact us](mailto:inquiry@vesoft.com).
+    The export function is exclusively available in the Enterprise Edition. If you require access to this version, please [contact us](mailto:inquiry@vesoft.com).
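The "Read and write APIs with Spark-connector/Flink-connector" bullet introduced above is the one method in this list that involves writing code. As a point of reference, below is a minimal write-side (import) sketch against the nebula-spark-connector builder API as documented in the connector README; the addresses, space/tag names, CSV path, and the `id` column are all placeholder assumptions, and option names can vary between connector versions.

```scala
import com.vesoft.nebula.connector.connector.NebulaDataFrameWriter
import com.vesoft.nebula.connector.{NebulaConnectionConfig, WriteNebulaVertexConfig}
import org.apache.spark.sql.SparkSession

object WriteSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nebula-write-sketch").getOrCreate()

    // Source data: one row per vertex, one column ("id") holding the vertex ID.
    val df = spark.read.option("header", "true").csv("person.csv")

    // Addresses of the meta and graph services (placeholders).
    val connection = NebulaConnectionConfig
      .builder()
      .withMetaAddress("127.0.0.1:9559")
      .withGraphAddress("127.0.0.1:9669")
      .build()

    // Map the DataFrame onto a tag: target space, tag name, and VID column.
    val writeVertex = WriteNebulaVertexConfig
      .builder()
      .withSpace("test")
      .withTag("person")
      .withVidField("id")
      .withBatch(1000) // rows per write batch
      .build()

    df.write.nebula(connection, writeVertex).writeVertices()
    spark.stop()
  }
}
```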
diff --git a/docs-2.0-zh/import-export/write-tools.md b/docs-2.0-zh/import-export/write-tools.md
index f809e225a59..203ed0484fa 100644
--- a/docs-2.0-zh/import-export/write-tools.md
+++ b/docs-2.0-zh/import-export/write-tools.md
@@ -1,18 +1,27 @@
-# 导入工具选择
+# 导入导出工具概述
+
+## 导入工具
 
-有多种方式可以将数据写入 {{nebula.name}} {{ nebula.release }}:
+有多种方式可以将数据写入{{nebula.name}} {{ nebula.release }}:
 
-- 使用[命令行 -f 的方式](../2.quick-start/3.quick-start-on-premise/3.connect-to-nebula-graph.md)导入:可以导入少量准备好的 nGQL 文件,适合少量手工测试数据准备;
+- 使用[命令行 -f 的方式](../2.quick-start/3.quick-start-on-premise/3.connect-to-nebula-graph.md)导入:可以导入少量准备好的 nGQL 文件,适合少量手工测试数据准备。
 
-- 使用 [Studio 导入](../nebula-studio/quick-start/st-ug-import-data.md):可以用过浏览器导入本机多个 csv 文件,格式有限制;
+- 使用 [Studio 导入](../nebula-studio/quick-start/st-ug-import-data.md):可以通过浏览器导入本机多个 CSV 文件,格式有限制。
 
-- 使用 [Importer 导入](use-importer.md):导入单机多个 csv 文件,大小没有限制,格式灵活;数据量十亿级以内;
-- 使用 [Exchange 导入](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md):从 Neo4j, Hive, MySQL 等多种源分布式导入,需要有 Spark 集群;数据量十亿级以上
-- 使用 [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md) 导入:有相应组件 (Spark/Flink),撰写少量代码;
+- 使用 [Importer 导入](use-importer.md):导入单机多个 CSV 文件,大小没有限制,格式灵活。适合数据量十亿级以内场景。
+- 使用 [Exchange 导入](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md):从 Neo4j、Hive、MySQL 等多种源分布式导入,需要有 Spark 集群。适合数据量十亿级以上场景。
+- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:有相应组件 (Spark/Flink),撰写少量代码即可。
 - 使用 [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md):编写程序的方式导入,需要有一定编程和调优能力。
 
 下图给出了几种方式的定位:
 
 ![image](https://docs-cdn.nebula-graph.com.cn/figures/write-choice.png)
+
+## 导出工具
+
+- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:有相应组件 (Spark/Flink),撰写少量代码即可。
+- 使用 Exchange 导出功能将数据导出至 CSV 文件或另一个图空间(支持不同 {{nebula.name}} 集群)中。
+
+  !!! enterpriseonly
+
+    仅企业版 Exchange 提供导出功能。如需企业版,请[联系我们](https://discuss-cdn.nebula-graph.com.cn/uploads/default/original/3X/d/1/d1e1b0e55e29776ee60e3f34c843474ec884393d.jpeg)。
\ No newline at end of file

From 199abd861d0dbf7e6822dbee7106e76d2f21cf86 Mon Sep 17 00:00:00 2001
From: cooper-lzy <78672629+cooper-lzy@users.noreply.github.com>
Date: Sun, 18 Feb 2024 13:47:27 +0800
Subject: [PATCH 2/3] update

---
 docs-2.0-en/import-export/write-tools.md | 6 +++---
 docs-2.0-zh/import-export/write-tools.md | 8 ++++----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs-2.0-en/import-export/write-tools.md b/docs-2.0-en/import-export/write-tools.md
index 0dfbd53d911..606efe97c09 100644
--- a/docs-2.0-en/import-export/write-tools.md
+++ b/docs-2.0-en/import-export/write-tools.md
@@ -6,9 +6,9 @@ There are many ways to write NebulaGraph {{ nebula.release }}:
 - Import with [the command -f](../2.quick-start/3.connect-to-nebula-graph.md): This method imports a small number of prepared nGQL files, which is suitable to prepare for a small amount of manual test data.
 
 - Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple CSV files of this machine. A single file cannot exceed 100 MB, and its format is limited.
-- Import with [Importer](use-importer.md): This method imports multiple CSV files on a single machine with unlimited size and flexible format.
+- Import with [Importer](use-importer.md): This method imports multiple CSV files on a single machine with unlimited size and flexible format. Suitable for scenarios with less than one billion records of data.
 - Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports from various distribution sources, such as Neo4j, Hive, MySQL, etc., which requires a Spark cluster.
-- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
+- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to make use of the APIs provided by Spark/Flink connector. Suitable for scenarios with more than one billion records of data.
 - Import with [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md): This method imports in the way of writing programs, which requires certain programming and tuning skills.
 
 The following figure shows the positions of these ways:
@@ -17,7 +17,7 @@ The following figure shows the positions of these ways:
 
 ## Export tools
 
-- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
+- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to make use of the APIs provided by Spark/Flink connector.
 - Export the data in database to a CSV file or another graph space (different NebulaGraph database clusters are supported) using the export function of the Exchange.
 
   !!! enterpriseonly
diff --git a/docs-2.0-zh/import-export/write-tools.md b/docs-2.0-zh/import-export/write-tools.md
index 203ed0484fa..d92586d84c2 100644
--- a/docs-2.0-zh/import-export/write-tools.md
+++ b/docs-2.0-zh/import-export/write-tools.md
@@ -8,9 +8,9 @@
 
 - 使用 [Studio 导入](../nebula-studio/quick-start/st-ug-import-data.md):可以通过浏览器导入本机多个 CSV 文件,格式有限制。
 
-- 使用 [Importer 导入](use-importer.md):导入单机多个 CSV 文件,大小没有限制,格式灵活。适合数据量十亿级以内场景。
-- 使用 [Exchange 导入](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md):从 Neo4j、Hive、MySQL 等多种源分布式导入,需要有 Spark 集群。适合数据量十亿级以上场景。
-- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:有相应组件 (Spark/Flink),撰写少量代码即可。
+- 使用 [Importer 导入](use-importer.md):导入单机多个 CSV 文件,大小没有限制,格式灵活。适合十亿条数据以内的场景。
+- 使用 [Exchange 导入](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md):从 Neo4j、Hive、MySQL 等多种源分布式导入,需要有 Spark 集群。适合十亿条数据以上的场景。
+- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:这种方式需要编写少量代码来使用 Spark/Flink 连接器提供的 API。
 - 使用 [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md):编写程序的方式导入,需要有一定编程和调优能力。
 
 下图给出了几种方式的定位:
@@ -19,7 +19,7 @@
 
 ## 导出工具
 
-- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:有相应组件 (Spark/Flink),撰写少量代码即可。
+- 使用 [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md) 读写 API:这种方式需要编写少量代码来使用 Spark/Flink 连接器提供的 API。
 - 使用 Exchange 导出功能将数据导出至 CSV 文件或另一个图空间(支持不同 {{nebula.name}} 集群)中。
 
   !!! enterpriseonly
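Patch 2 above rewords both connector bullets to say that you "write a small amount of code to make use of the APIs provided by Spark/Flink connector". The read side — the direction the new export section cares about — looks roughly as follows with nebula-spark-connector. This is a sketch based on the README's builder API, with placeholder addresses and names; the final CSV step is plain Spark.

```scala
import com.vesoft.nebula.connector.connector.NebulaDataFrameReader
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig}
import org.apache.spark.sql.SparkSession

object ReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nebula-read-sketch").getOrCreate()

    // Reading only needs the meta service address (placeholder).
    val connection = NebulaConnectionConfig
      .builder()
      .withMetaAddress("127.0.0.1:9559")
      .build()

    // Scan the "person" tag in space "test", returning two property columns.
    val readVertex = ReadNebulaConfig
      .builder()
      .withSpace("test")
      .withLabel("person")
      .withNoColumn(false)
      .withReturnCols(List("name", "age"))
      .withPartitionNum(10)
      .build()

    val vertices = spark.read.nebula(connection, readVertex).loadVerticesToDF()

    // Plain Spark from here on: dump the scan to CSV files.
    vertices.write.option("header", "true").csv("person_export")
    spark.stop()
  }
}
```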
From e4f202d3c5e031c7c8bddae8d3f3dfa4650dcf10 Mon Sep 17 00:00:00 2001
From: cooper-lzy <78672629+cooper-lzy@users.noreply.github.com>
Date: Sun, 18 Feb 2024 13:56:55 +0800
Subject: [PATCH 3/3] Update write-tools.md

---
 docs-2.0-en/import-export/write-tools.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs-2.0-en/import-export/write-tools.md b/docs-2.0-en/import-export/write-tools.md
index 606efe97c09..2566d050998 100644
--- a/docs-2.0-en/import-export/write-tools.md
+++ b/docs-2.0-en/import-export/write-tools.md
@@ -7,8 +7,8 @@ There are many ways to write NebulaGraph {{ nebula.release }}:
 - Import with [the command -f](../2.quick-start/3.connect-to-nebula-graph.md): This method imports a small number of prepared nGQL files, which is suitable to prepare for a small amount of manual test data.
 - Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple CSV files of this machine. A single file cannot exceed 100 MB, and its format is limited.
 - Import with [Importer](use-importer.md): This method imports multiple CSV files on a single machine with unlimited size and flexible format. Suitable for scenarios with less than one billion records of data.
-- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports from various distribution sources, such as Neo4j, Hive, MySQL, etc., which requires a Spark cluster.
-- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to make use of the APIs provided by Spark/Flink connector. Suitable for scenarios with more than one billion records of data.
+- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports from various distributed sources, such as Neo4j, Hive, MySQL, etc., which requires a Spark cluster. Suitable for scenarios with more than one billion records of data.
+- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to make use of the APIs provided by Spark/Flink connector.
 - Import with [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md): This method imports in the way of writing programs, which requires certain programming and tuning skills.
 
 The following figure shows the positions of these ways:
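The Exchange export function these patches document is Enterprise-only and driven by a configuration file, so it is not reproduced here. For the "export to another graph space" case specifically, the two connector sketches above can be combined into a rough open-source approximation: read a tag from one cluster and write it into another. Everything below is an assumption-labeled sketch — the addresses and space/tag names are placeholders, and `_vertexId` is the column name the reader is assumed to expose for vertex IDs.

```scala
import com.vesoft.nebula.connector.connector.{NebulaDataFrameReader, NebulaDataFrameWriter}
import com.vesoft.nebula.connector.{NebulaConnectionConfig, ReadNebulaConfig, WriteNebulaVertexConfig}
import org.apache.spark.sql.SparkSession

object CopySpaceSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("space-to-space-sketch").getOrCreate()

    // Source and target clusters; they may be two different deployments.
    val src = NebulaConnectionConfig.builder()
      .withMetaAddress("src-meta:9559")
      .build()
    val dst = NebulaConnectionConfig.builder()
      .withMetaAddress("dst-meta:9559")
      .withGraphAddress("dst-graphd:9669")
      .build()

    // Scan the "person" tag from the source space.
    val read = ReadNebulaConfig.builder()
      .withSpace("space_a")
      .withLabel("person")
      .withNoColumn(false)
      .withReturnCols(List("name", "age"))
      .withPartitionNum(10)
      .build()

    val people = spark.read.nebula(src, read).loadVerticesToDF()

    // Write the same rows into the target space, reusing the scanned
    // vertex ID column ("_vertexId" is an assumption about the reader's output).
    val write = WriteNebulaVertexConfig.builder()
      .withSpace("space_b")
      .withTag("person")
      .withVidField("_vertexId")
      .withBatch(1000)
      .build()

    people.write.nebula(dst, write).writeVertices()
    spark.stop()
  }
}
```

Note that this copies data only; the target space and its schema must already exist.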