update export tools #2463

Merged 3 commits on Feb 18, 2024
16 changes: 9 additions & 7 deletions docs-2.0-en/import-export/write-tools.md
@@ -5,19 +5,21 @@
There are many ways to write NebulaGraph {{ nebula.release }}:

- Import with [the command -f](../2.quick-start/3.connect-to-nebula-graph.md): This method imports a small number of prepared nGQL files and is suitable for preparing a small amount of manual test data.
- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method uses a browser to import multiple csv files of this machine. A single file cannot exceed 100 MB, and its format is limited.
- Import with [Importer](use-importer.md): This method imports multiple csv files on a single machine with unlimited size and flexible format.
- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports from various distribution sources, such as Neo4j, Hive, MySQL, etc., which requires a Spark cluster.
- Import with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method has corresponding components (Spark/Flink) and writes a small amount of code.
- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): This method imports multiple CSV files from the local machine through a browser. A single file cannot exceed 100 MB, and its format is restricted.
- Import with [Importer](use-importer.md): This method imports multiple CSV files on a single machine, with no size limit and flexible formats. Suitable for datasets with fewer than one billion records.
- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): This method imports data in a distributed way from various sources, such as Neo4j, Hive, and MySQL, and requires a Spark cluster. Suitable for datasets with more than one billion records.
- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to use the APIs provided by the Spark/Flink connectors.
- Import with [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md): This method imports data programmatically and requires some programming and tuning skills.
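The `-f` method above works on a plain text file of nGQL statements. As an illustration only, a small script like the following could generate such a file of test data; the `player` tag, its schema, and the file name are assumptions for this sketch, not taken from this page:

```python
# Sketch: generate a small nGQL file of INSERT statements that the console's
# -f option could then import. The `player` tag and its properties are hypothetical.
rows = [
    ("player100", "Tim Duncan", 42),
    ("player101", "Tony Parker", 36),
]

statements = [
    'INSERT VERTEX player(name, age) VALUES "{vid}":("{name}", {age});'.format(
        vid=vid, name=name, age=age
    )
    for vid, name, age in rows
]

# Write one statement per line; the resulting file can be passed to `-f`.
with open("test_data.ngql", "w") as f:
    f.write("\n".join(statements) + "\n")

print(statements[0])
```

A file produced this way stays small and human-readable, which matches the manual-test-data use case this method targets.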

The following figure shows the positioning of these methods:

![image](https://docs-cdn.nebula-graph.com.cn/figures/write-choice.png)


## Export tools

!!! enterpriseonly
- Read and write APIs with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): This method requires you to write a small amount of code to use the APIs provided by the Spark/Flink connectors.
- Export the data in the database to a CSV file or another graph space (different NebulaGraph database clusters are supported) using the export function of Exchange.

!!! enterpriseonly

The export tool is exclusively available in the Enterprise Edition. If you require access to this version, please [contact us](mailto:[email protected]).
The export function is exclusively available in the Enterprise Edition. If you require access to this version, please [contact us](mailto:[email protected]).
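One export target described above is a plain CSV file. As a rough illustration of the shape such a file might take (the column names and rows here are hypothetical examples, not output produced by Exchange):

```python
import csv

# Hypothetical exported vertices: one row per vertex, ID first, then properties.
exported = [
    ("player100", "Tim Duncan", 42),
    ("player101", "Tony Parker", 36),
]

# Write a header row followed by the data rows.
with open("players_export.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["_vid", "name", "age"])
    writer.writerows(exported)

# Read the file back to inspect the resulting layout.
with open("players_export.csv", newline="") as f:
    lines = f.read().splitlines()

print(lines)
```

A flat file like this can in turn be fed back into Importer or Studio, which is what makes CSV a convenient interchange format between clusters.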
23 changes: 16 additions & 7 deletions docs-2.0-zh/import-export/write-tools.md
@@ -1,18 +1,27 @@
# Selecting an import tool
# Overview of import and export tools

## Import tools

There are many ways to write data into {{nebula.name}} {{ nebula.release }}:

- Import with [the command line -f option](../2.quick-start/3.quick-start-on-premise/3.connect-to-nebula-graph.md): imports a small number of prepared nGQL files; suitable for preparing a small amount of manual test data.

- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): imports multiple csv files from the local machine through a browser; the format is restricted.
- Import with [Studio](../nebula-studio/quick-start/st-ug-import-data.md): imports multiple CSV files from the local machine through a browser; the format is restricted.

- Import with [Importer](use-importer.md): imports multiple csv files on a single machine, with no size limit and flexible formats; for datasets within one billion records;
- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): distributed import from various sources such as Neo4j, Hive, and MySQL; requires a Spark cluster; for datasets beyond one billion records
- Import with [Spark-connector](../connector/nebula-spark-connector.md)/[Flink-connector](../connector/nebula-flink-connector.md): uses the corresponding components (Spark/Flink) and requires writing a small amount of code;
- Import with [Importer](use-importer.md): imports multiple CSV files on a single machine, with no size limit and flexible formats. Suitable for datasets with fewer than one billion records.
- Import with [Exchange](nebula-exchange/about-exchange/ex-ug-what-is-exchange.md): distributed import from various sources such as Neo4j, Hive, and MySQL; requires a Spark cluster. Suitable for datasets with more than one billion records.
- Read and write APIs with [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md): requires writing a small amount of code to use the APIs provided by the Spark/Flink connectors.
- Import with [C++/GO/Java/Python SDK](../20.appendix/6.eco-tool-version.md): imports data programmatically; requires some programming and tuning skills.

The following figure shows the positioning of these methods:

![image](https://docs-cdn.nebula-graph.com.cn/figures/write-choice.png)

## Export tools

- Read and write APIs with [Spark-connector](nebula-spark-connector.md)/[Flink-connector](nebula-flink-connector.md): requires writing a small amount of code to use the APIs provided by the Spark/Flink connectors.
- Use the export function of Exchange to export data to a CSV file or another graph space (different {{nebula.name}} clusters are supported).

!!! enterpriseonly

    Only the Enterprise Edition of Exchange provides the export function. For the Enterprise Edition, please [contact us](https://discuss-cdn.nebula-graph.com.cn/uploads/default/original/3X/d/1/d1e1b0e55e29776ee60e3f34c843474ec884393d.jpeg).