diff --git a/404.html b/404.html index a0d17a74c..7709bcba0 100644 --- a/404.html +++ b/404.html @@ -1,5 +1,5 @@ 404 Page not found | HugeGraph -

Not found

Oops! This page doesn't exist. Try going back to our home page.

You can learn how to make a 404 page like this in Custom 404 Pages.

+

Not found

Oops! This page doesn't exist. Try going back to our home page.

You can learn how to make a 404 page like this in Custom 404 Pages.

diff --git a/about/_print/index.html b/about/_print/index.html index 0fa47ca8a..8058eeb39 100644 --- a/about/_print/index.html +++ b/about/_print/index.html @@ -16,7 +16,7 @@ "> -

About Apache HugeGraph

A sample site using the Docsy Hugo theme.

Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

This is another section

This is another section

+

About Apache HugeGraph

A sample site using the Docsy Hugo theme.

Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

This is another section

This is another section

diff --git a/about/index.html b/about/index.html index 30f41e7e4..9ed8975e9 100644 --- a/about/index.html +++ b/about/index.html @@ -16,7 +16,7 @@ "> -

About Apache HugeGraph

A sample site using the Docsy Hugo theme.

Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

This is another section

This is another section

+

About Apache HugeGraph

A sample site using the Docsy Hugo theme.

Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

This is another section

This is another section

diff --git a/blog/2018/01/04/another-great-release/index.html b/blog/2018/01/04/another-great-release/index.html index 9a1515720..cd0a2ba3d 100644 --- a/blog/2018/01/04/another-great-release/index.html +++ b/blog/2018/01/04/another-great-release/index.html @@ -29,7 +29,7 @@ }
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
 

Inline code inside table cells should still be distinguishable.

Language | Code
Javascript | var foo = "bar";
Ruby | foo = "bar"{

Small images should be shown at their actual size.

Large images should always scale down and fit in the content container.

Components

Alerts

Sizing

Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Parameters available

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using pixels

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using rem

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Memory

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

RAM to use

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

More is better

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Used RAM

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

This is the final element on the page and there should be no margin below this.
-
+ diff --git a/blog/2018/10/06/easy-documentation-with-docsy/index.html b/blog/2018/10/06/easy-documentation-with-docsy/index.html index 90d8d9be1..b9092e10d 100644 --- a/blog/2018/10/06/easy-documentation-with-docsy/index.html +++ b/blog/2018/10/06/easy-documentation-with-docsy/index.html @@ -28,7 +28,7 @@ -

The image will be rendered at the size and byline specified in the front matter.

+

The image will be rendered at the size and byline specified in the front matter.

diff --git a/blog/2018/10/06/the-second-blog-post/index.html b/blog/2018/10/06/the-second-blog-post/index.html index 7d89aafc3..6cea86c4c 100644 --- a/blog/2018/10/06/the-second-blog-post/index.html +++ b/blog/2018/10/06/the-second-blog-post/index.html @@ -29,7 +29,7 @@ }
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
 

Inline code inside table cells should still be distinguishable.

Language | Code
Javascript | var foo = "bar";
Ruby | foo = "bar"{

Small images should be shown at their actual size.

Large images should always scale down and fit in the content container.

Components

Alerts

Sizing

Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Parameters available

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using pixels

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using rem

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Memory

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

RAM to use

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

More is better

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Used RAM

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

This is the final element on the page and there should be no margin below this.
-
+ diff --git a/blog/_print/index.html b/blog/_print/index.html index 714f6e21f..7145eadce 100644 --- a/blog/_print/index.html +++ b/blog/_print/index.html @@ -68,7 +68,7 @@ }
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
 

Inline code inside table cells should still be distinguishable.

Language | Code
Javascript | var foo = "bar";
Ruby | foo = "bar"{

Small images should be shown at their actual size.

Large images should always scale down and fit in the content container.

Components

Alerts

Sizing

Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Parameters available

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using pixels

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Using rem

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Memory

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

RAM to use

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

More is better

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

Used RAM

Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

This is the final element on the page and there should be no margin below this.
-
+ diff --git a/blog/index.html b/blog/index.html index 8235d01c7..cf21db7ea 100644 --- a/blog/index.html +++ b/blog/index.html @@ -10,7 +10,7 @@ There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

Read more

  • Announcing Docsy

    Saturday, October 06, 2018 in News

    Featured Image for Easy documentation with Docsy
    Photo: Riona MacNamara / CC-BY-CA

    This is a typical blog post that includes images. The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author. Including images Here’s an image …

    Read more

  • Release New Features

    Thursday, January 04, 2018 in Releases

    Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). -There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

  • +There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

    diff --git a/blog/news/_print/index.html b/blog/news/_print/index.html index f0a34f283..de81d455e 100644 --- a/blog/news/_print/index.html +++ b/blog/news/_print/index.html @@ -45,7 +45,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/blog/news/index.html b/blog/news/index.html index 1ed2a5a82..e577ecd07 100644 --- a/blog/news/index.html +++ b/blog/news/index.html @@ -7,7 +7,7 @@ Print entire section
    RSS

    Posts in 2018

    • Second blog post

      Saturday, October 06, 2018 in News

      Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

      Read more

    • Announcing Docsy

      Saturday, October 06, 2018 in News

      Featured Image for Easy documentation with Docsy
      Photo: Riona MacNamara / CC-BY-CA

      This is a typical blog post that includes images. The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author. -Including images Here’s an image …

      Read more

    +Including images Here’s an image …

    Read more

    diff --git a/blog/releases/_print/index.html b/blog/releases/_print/index.html index 26a146e57..6133be176 100644 --- a/blog/releases/_print/index.html +++ b/blog/releases/_print/index.html @@ -21,7 +21,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/blog/releases/index.html b/blog/releases/index.html index bd7aa6216..dbef7b23c 100644 --- a/blog/releases/index.html +++ b/blog/releases/index.html @@ -5,7 +5,7 @@ Create documentation issue Create project issue Print entire section
    RSS

    Posts in 2018

    • Release New Features

      Thursday, January 04, 2018 in Releases

      Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). -There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

      Read more

    +There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

    diff --git a/categories/index.html b/categories/index.html index 11ec03f8e..33c080bc7 100644 --- a/categories/index.html +++ b/categories/index.html @@ -1,5 +1,5 @@ Categories | HugeGraph -

    Categories

    +

    Categories

    diff --git a/cn/404.html b/cn/404.html index ef7d5792d..6250e5ae5 100644 --- a/cn/404.html +++ b/cn/404.html @@ -1,5 +1,5 @@ 404 Page not found | HugeGraph -

    Not found

    Oops! This page doesn't exist. Try going back to our home page.

    You can learn how to make a 404 page like this in Custom 404 Pages.

    +

    Not found

    Oops! This page doesn't exist. Try going back to our home page.

    You can learn how to make a 404 page like this in Custom 404 Pages.

    diff --git a/cn/about/_print/index.html b/cn/about/_print/index.html index 0e02f4b91..fdc1940a4 100644 --- a/cn/about/_print/index.html +++ b/cn/about/_print/index.html @@ -16,7 +16,7 @@ "> -

    About Goldydocs

    A sample site using the Docsy Hugo theme.

    Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

    This is another section

    This is another section

    +

    About Goldydocs

    A sample site using the Docsy Hugo theme.

    Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

    This is another section

    This is another section

    diff --git a/cn/about/index.html b/cn/about/index.html index 10fc6d1eb..14bd0050c 100644 --- a/cn/about/index.html +++ b/cn/about/index.html @@ -16,7 +16,7 @@ "> -

    About Goldydocs

    A sample site using the Docsy Hugo theme.

    Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

    This is another section

    This is another section

    +

    About Goldydocs

    A sample site using the Docsy Hugo theme.

    Goldydocs is a sample site using the Docsy Hugo theme that shows what it can do and provides you with a template site structure. It’s designed for you to clone and edit as much as you like. See the different sections of the documentation and site for more ideas.

    This is another section

    This is another section

    diff --git a/cn/blog/2018/01/04/another-great-release/index.html b/cn/blog/2018/01/04/another-great-release/index.html index e9262f6bf..a8cc2202e 100644 --- a/cn/blog/2018/01/04/another-great-release/index.html +++ b/cn/blog/2018/01/04/another-great-release/index.html @@ -29,7 +29,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/cn/blog/2018/10/06/easy-documentation-with-docsy/index.html b/cn/blog/2018/10/06/easy-documentation-with-docsy/index.html index 17654d647..2c68a65c4 100644 --- a/cn/blog/2018/10/06/easy-documentation-with-docsy/index.html +++ b/cn/blog/2018/10/06/easy-documentation-with-docsy/index.html @@ -28,7 +28,7 @@ -

    The image will be rendered at the size and byline specified in the front matter.

    +

    The image will be rendered at the size and byline specified in the front matter.

    diff --git a/cn/blog/2018/10/06/the-second-blog-post/index.html b/cn/blog/2018/10/06/the-second-blog-post/index.html index 79f53e223..293a533e9 100644 --- a/cn/blog/2018/10/06/the-second-blog-post/index.html +++ b/cn/blog/2018/10/06/the-second-blog-post/index.html @@ -29,7 +29,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/cn/blog/_print/index.html b/cn/blog/_print/index.html index 2b7d5a98e..d2ced4fa3 100644 --- a/cn/blog/_print/index.html +++ b/cn/blog/_print/index.html @@ -68,7 +68,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/cn/blog/index.html b/cn/blog/index.html index b7809911c..527ab9398 100644 --- a/cn/blog/index.html +++ b/cn/blog/index.html @@ -10,7 +10,7 @@ There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

  • Announcing Docsy

    Saturday, October 06, 2018 in News

    Featured Image for Easy documentation with Docsy
    Photo: Riona MacNamara / CC-BY-CA

    This is a typical blog post that includes images. The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author. Including images Here’s an image …

    Read more

  • Release New Features

    Thursday, January 04, 2018 in Releases

    Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). -There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

  • +There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

    diff --git a/cn/blog/news/_print/index.html b/cn/blog/news/_print/index.html index 88a79f716..dcc82cd67 100644 --- a/cn/blog/news/_print/index.html +++ b/cn/blog/news/_print/index.html @@ -45,7 +45,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/cn/blog/news/index.html b/cn/blog/news/index.html index a096c0cad..a32cabc1c 100644 --- a/cn/blog/news/index.html +++ b/cn/blog/news/index.html @@ -7,7 +7,7 @@ Print entire section
    RSS

    Posts in 2018

    • Second blog post

      Saturday, October 06, 2018 in News

      Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

      Read more

    • Announcing Docsy

      Saturday, October 06, 2018 in News

      Featured Image for Easy documentation with Docsy
      Photo: Riona MacNamara / CC-BY-CA

      This is a typical blog post that includes images. The front matter specifies the date of the blog post, its title, a short description that will be displayed on the blog landing page, and its author. -Including images Here’s an image …

      Read more

    +Including images Here’s an image …

    Read more

    diff --git a/cn/blog/releases/_print/index.html b/cn/blog/releases/_print/index.html index 2fa41a458..4f88c803e 100644 --- a/cn/blog/releases/_print/index.html +++ b/cn/blog/releases/_print/index.html @@ -21,7 +21,7 @@ }
    Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
     

    Inline code inside table cells should still be distinguishable.

    Language | Code
    Javascript | var foo = "bar";
    Ruby | foo = "bar"{

    Small images should be shown at their actual size.

    Large images should always scale down and fit in the content container.

    Components

    Alerts

    Sizing

    Add some sections here to see how the ToC looks like. Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Parameters available

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using pixels

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Using rem

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Memory

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    RAM to use

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    More is better

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    Used RAM

    Bacon ipsum dolor sit amet t-bone doner shank drumstick, pork belly porchetta chuck sausage brisket ham hock rump pig. Chuck kielbasa leberkas, pork bresaola ham hock filet mignon cow shoulder short ribs biltong.

    This is the final element on the page and there should be no margin below this.
    -
    + diff --git a/cn/blog/releases/index.html b/cn/blog/releases/index.html index 5201f6e78..5d500db99 100644 --- a/cn/blog/releases/index.html +++ b/cn/blog/releases/index.html @@ -5,7 +5,7 @@ Create documentation issue Create project issue Print entire section
    RSS

    Posts in 2018

    • Release New Features

      Thursday, January 04, 2018 in Releases

      Text can be bold, italic, or strikethrough. Links should be blue with no underlines (unless hovered over). -There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

      Read more

    +There should be whitespace between paragraphs. There should be whitespace between paragraphs. There should be whitespace between paragraphs. …

    Read more

    diff --git a/cn/categories/index.html b/cn/categories/index.html index 71921f227..fc904af94 100644 --- a/cn/categories/index.html +++ b/cn/categories/index.html @@ -1,5 +1,5 @@ Categories | HugeGraph -

    Categories

    +

    Categories

    diff --git a/cn/community/_print/index.html b/cn/community/_print/index.html index d425c4902..0371c8e81 100644 --- a/cn/community/_print/index.html +++ b/cn/community/_print/index.html @@ -1,6 +1,6 @@ Community | HugeGraph -

    Join the HugeGraph community

HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here are a few ways to find out what's happening and get involved.

    +

    Join the HugeGraph community

HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here are a few ways to find out what's happening and get involved.

    diff --git a/cn/community/index.html b/cn/community/index.html index d8055bf60..ff66120b9 100644 --- a/cn/community/index.html +++ b/cn/community/index.html @@ -1,6 +1,6 @@ Community | HugeGraph -

    Join the HugeGraph community

HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here are a few ways to find out what's happening and get involved.

    +

    Join the HugeGraph community

HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here are a few ways to find out what's happening and get involved.

    diff --git a/cn/docs/_print/index.html b/cn/docs/_print/index.html index 80be3e3de..20690877d 100644 --- a/cn/docs/_print/index.html +++ b/cn/docs/_print/index.html @@ -6548,7 +6548,7 @@ g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name')

It is recommended to use HugeGraph-Studio to execute the above code visually. The code can also be executed through HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver, and other means.

3.2 Summary

HugeGraph currently supports the Gremlin syntax; users can implement all kinds of queries via Gremlin / REST-API.
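For illustration, the sketch below submits the Gremlin statement shown above to HugeGraph-Server over HTTP. The host, port, graph name, and the POST /gremlin endpoint follow common HugeGraph-Server conventions and should be checked against your own deployment; the id bound to pluto is a placeholder.

```python
# Minimal sketch: run a Gremlin query against HugeGraph-Server's REST endpoint.
# Assumptions: server at localhost:8080, a graph named "hugegraph", and the
# POST /gremlin endpoint; replace the "pluto" binding with a real vertex id.
import json
import requests

GREMLIN_URL = "http://localhost:8080/gremlin"

payload = {
    "gremlin": ("g.V(pluto).out('brother').as('god')"
                ".out('lives').as('place')"
                ".select('god','place').by('name')"),
    "bindings": {"pluto": "<vertex id of pluto>"},  # placeholder id
    "language": "gremlin-groovy",
    "aliases": {"graph": "hugegraph", "g": "__g_hugegraph"},
}

resp = requests.post(GREMLIN_URL, json=payload, timeout=30)
resp.raise_for_status()
print(json.dumps(resp.json()["result"]["data"], indent=2, ensure_ascii=False))
```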

    8 - PERFORMANCE

    8.1 - HugeGraph BenchMark Performance

1 Test environment

1.1 Hardware

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD

1.2 Software

1.2.1 Test cases

The tests use graphdb-benchmark, a graph database benchmark suite that mainly contains 4 categories of tests:

1.2.2 Test datasets

The tests use both synthetic and real-world data.

Scale of the datasets used in this test
Name | Vertices | Edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB
com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB

1.3 Service configuration

The Titan version that graphdb-benchmark supports is 0.5.4.

2 Test results

2.1 Batch insertion performance

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033
Titan | 10.15 | 108.569 | 150.266 | 1217.944
Neo4j | 3.884 | 18.938 | 24.890 | 281.537

Notes

Conclusion

2.2 Traversal performance

2.2.1 Terminology
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083
Titan | 8.084 | 92.507 | 184.543 | 1099.371
Neo4j | 2.424 | 10.537 | 11.609 | 106.919

Notes

2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271
Titan | 7.361 | 93.344 | 169.218 | 1085.235
Neo4j | 1.673 | 4.775 | 4.284 | 40.507

Notes

Conclusion

2.3 Performance of common graph analysis methods in HugeGraph

Terminology
FS performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155
Titan | 11.818 | 0.239 | 377.709 | 575.678
Neo4j | 1.719 | 1.800 | 1.956 | 8.530

Notes

Conclusion
K-neighbor performance
Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6
v1 time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 time | 0.027s | 0.034s | 0.115 | 1.36s | OOM
v1111 time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM

Notes

K-out performance
Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6
v1 time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
v1 count | 10 | 133 | 2453 | 50,830 | 1,128,688
v111 time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
v111 count | 10 | 211 | 4944 | 113150 | 2,629,970
v1111 time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
v1111 count | 10 | 140 | 2555 | 50825 | 1,070,230

Notes

Conclusion

2.4 Comprehensive graph performance test - CW

Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547
Titan | 45.790 | 820.633 | 2652.235 | 9568.623
Neo4j | 5.913 | 50.267 | 142.354 | 460.880

Notes

Conclusion

    8.2 - HugeGraph-API Performance

The HugeGraph API performance tests mainly measure HugeGraph-Server's ability to handle concurrent RESTful API requests, including:

The RESTful API performance results for each HugeGraph release can be found at:

Earlier releases only provided API performance results for the best-performing backend among those HugeGraph supports; starting with version 0.5.6, both stand-alone and cluster results are provided.

    8.2.1 - v0.5.6 Stand-alone(RocksDB)

1 Test environment

Machine under test

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD

Note: the load-generating machine and the machine under test are in the same data center.

2 Test description

2.1 Definitions (all times are in ms)

2.2 Underlying storage

RocksDB is used as the backend storage. HugeGraph and RocksDB run on the same machine, and apart from the host and port the server configuration files keep their default values.

3 Summary of performance results

1. HugeGraph inserts single vertices and edges at roughly 10,000 per second
2. Batch insertion of vertices and edges is far faster than single insertion (see the sketch after this list)
3. Concurrency for querying vertices and edges by id can exceed 13,000, with an average request latency below 50ms
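As a point of reference for item 2 above, a minimal sketch of a batch vertex insertion through the RESTful API is shown below. The endpoint path, graph name, and the "person"/"name" schema are illustrative assumptions and must match your server version and schema.

```python
# Minimal sketch: batch-insert vertices through HugeGraph-Server's RESTful API.
# Assumptions: server at localhost:8080, graph "hugegraph", and an existing
# vertex label "person" with a "name" property (adjust to your own schema).
import requests

BASE = "http://localhost:8080/apis/graphs/hugegraph"

batch = [
    {"label": "person", "properties": {"name": f"person-{i}"}}
    for i in range(500)  # one batch; the server caps the allowed batch size
]

resp = requests.post(f"{BASE}/graph/vertices/batch", json=batch, timeout=60)
resp.raise_for_status()
print(f"created {len(resp.json())} vertices")  # response lists the new vertex ids
```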

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters

Duration: 5min

Maximum vertex insertion rate:
image

Conclusion:

Maximum edge insertion rate
image

Conclusion:

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
Single vertex insertion
image

Conclusion:

Single edge insertion
image

Conclusion:

4.3 Query by id

4.3.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
Vertex query by id
image

Conclusion:

Edge query by id
image

Conclusion:

    8.2.2 - v0.5.6 Cluster(Cassandra)

1 Test environment

Machine under test

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD

Note: the load-generating machine and the machine under test are in the same data center.

2 Test description

2.1 Definitions (all times are in ms)

2.2 Underlying storage

A 15-node Cassandra cluster is used as the backend storage. HugeGraph and the Cassandra cluster run on different servers, and apart from the host and port the server configuration files keep their default values.

3 Summary of performance results

1. HugeGraph inserts single vertices and edges at roughly 9,000/s and 4,500/s respectively
2. Batch insertion of vertices and edges reaches roughly 50,000/s and 150,000/s respectively, far faster than single insertion
3. Concurrency for querying vertices and edges by id can exceed 12,000, with an average request latency below 70ms

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters

Duration: 5min

Maximum vertex insertion rate:
image

Conclusion:

Maximum edge insertion rate
image

Conclusion:

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
Single vertex insertion
image

Conclusion:

Single edge insertion
image

Conclusion:

4.3 Query by id

4.3.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
Vertex query by id
image

Conclusion:

Edge query by id
image

Conclusion:

    8.2.3 - v0.4.4

1 Test environment

Machine under test

Machine | CPU | Memory | NIC | Disk
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD

Note: the load-generating machine and the machine under test are in the same data center.

2 Test description

2.1 Definitions (all times are in ms)

2.2 Underlying storage

RocksDB is used as the backend storage. HugeGraph and RocksDB run on the same machine, and apart from the host and port the server configuration files keep their default values.

3 Summary of performance results

1. HugeGraph can handle at most about 7,000 requests per second
2. Batch insertion is far faster than single insertion; on the high-performance server it reached 220,000 edges/s and 370,000 vertices/s
3. With a RocksDB backend, adding CPUs and memory improves batch insertion performance; doubling CPU and memory increases performance by 45%-60%
4. For batch insertion, replacing HDD with SSD brings only a small gain of 3%-5%

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters

Duration: 5min

Maximum vertex and edge insertion rate (high-performance server, RocksDB data on SSD):
image
Conclusion:

1. Impact of CPU and memory on insertion performance (both servers store RocksDB data on HDD, batch insertion)

image
Conclusion:

2. Impact of SSD vs. HDD on insertion performance (high-performance server, batch insertion)

image
Conclusion:

3. Impact of the number of concurrent threads on insertion performance (ordinary server, RocksDB data on HDD)

image
Conclusion:

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
image
Conclusion:

    8.2.4 - v0.2

1 Test environment

1.1 Hardware and software

The load-generating machine and the machine under test have identical configurations; the basic parameters are as follows:

CPU | Memory | NIC
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps

Test tool: apache-Jmeter-2.5.1

1.2 Service configuration

      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -

1.3 Terminology

Note: all times are in ms

2 Test results

2.1 schema

Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1
vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4
edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8

Conclusion: under 1000 concurrent requests sustained for 5 minutes, the schema APIs respond in 1-2ms on average and handle the load without strain.

2.2 Single insertion

2.2.1 Insertion rate test
Load parameters

Test method: fix the concurrency level and measure the processing rate of the server and the backend.

Performance metrics
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4
single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1
Conclusion
2.2.2 Stress limit test

Test method: keep increasing the concurrency level to find the maximum load under which the server can still serve requests normally.

Load parameters
Performance metrics
Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
Conclusion

2.3 Batch insertion

2.3.1 Insertion rate test
Load parameters

Test method: fix the concurrency level and measure the processing rate of the server and the backend.

Performance metrics
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
Conclusion

    8.3 - HugeGraph-Loader Performance

Usage scenarios

When the graph data to be bulk-loaded (vertices and edges) is at the billion-record level or below, or the total data volume is under a TB, the HugeGraph-Loader tool can be used to import graph data continuously and at high speed.
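As a rough sketch of how such an import is launched: the flag names below follow the HugeGraph-Loader documentation, but the script path, mapping file, and schema file are placeholders to adapt to your environment and loader version.

```python
# Minimal sketch: launch HugeGraph-Loader from Python to bulk-import a dataset.
# Assumptions: the loader is unpacked in the current directory, HugeGraph-Server
# runs on localhost:8080 with a graph named "hugegraph", and struct.json /
# schema.groovy describe the input files and the target schema.
import subprocess

cmd = [
    "bin/hugegraph-loader",
    "-g", "hugegraph",              # target graph name
    "-f", "example/struct.json",    # mapping from input files to vertices/edges
    "-s", "example/schema.groovy",  # schema definition executed before loading
    "-h", "localhost",              # HugeGraph-Server host
    "-p", "8080",                   # HugeGraph-Server port
]

subprocess.run(cmd, check=True)
```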

Performance

All tests use the edge data of the website dataset.

RocksDB single-machine performance

Cassandra cluster performance

    8.4 -

1 Test environment

1.1 Hardware

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD

1.2 Software

1.2.1 Test cases

The tests use graphdb-benchmark, a graph database benchmark suite that mainly contains 4 categories of tests:

1.2.2 Test datasets

The tests use both synthetic and real-world data.

Scale of the datasets used in this test
Name | Vertices | Edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB

1.3 Service configuration

The Titan version that graphdb-benchmark supports is 0.5.4.

2 Test results

2.1 Batch insertion performance

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 9.516 | 88.123 | 111.586
RocksDB | 2.345 | 14.076 | 16.636
Cassandra | 11.930 | 108.709 | 101.959
Memory | 3.077 | 15.204 | 13.841

Notes

Conclusion

2.2 Traversal performance

2.2.1 Terminology
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
Titan | 7.724 | 70.935 | 128.884
RocksDB | 8.876 | 65.852 | 63.388
Cassandra | 13.125 | 126.959 | 102.580
Memory | 22.309 | 207.411 | 165.609

Notes

2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 7.119 | 63.353 | 115.633
RocksDB | 6.032 | 64.526 | 52.721
Cassandra | 9.410 | 102.766 | 94.197
Memory | 12.340 | 195.444 | 140.89

Notes

Conclusion

2.3 Performance of common graph analysis methods in HugeGraph

Terminology
FS performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 11.333 | 0.313 | 376.06
RocksDB | 44.391 | 2.221 | 268.792
Cassandra | 39.845 | 3.337 | 331.113
Memory | 35.638 | 2.059 | 388.987

Notes

Conclusion
K-neighbor performance
Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6
v1 time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 time | 0.027s | 0.034s | 0.115 | 1.36s | OOM
v1111 time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM

Notes

K-out performance
Vertex | Depth 1 | Depth 2 | Depth 3 | Depth 4 | Depth 5 | Depth 6
v1 time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
v1 count | 10 | 133 | 2453 | 50,830 | 1,128,688
v111 time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
v111 count | 10 | 211 | 4944 | 113150 | 2,629,970
v1111 time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
v1111 count | 10 | 140 | 2555 | 50825 | 1,070,230

Notes

Conclusion

2.4 Comprehensive graph performance test - CW

Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000
Titan | 45.943 | 849.168 | 2737.117 | 9791.46
Memory(core) | 41.077 | 1825.905 | * | *
Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191
RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813

Notes

Conclusion

    9 - CHANGELOGS

    9.1 - HugeGraph 0.12 Release Notes

    API & Client

API updates

Other changes

Core & Server

Feature updates

Bug fixes

Configuration item changes:

Other changes

    Loader

    Tools

    9.2 - HugeGraph 0.11 Release Notes

    API & Client

Feature updates

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Internal changes

    9.3 - HugeGraph 0.10 Release Notes

    API & Client

Feature updates

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Internal changes

    9.4 - HugeGraph 0.9 Release Notes

    API & Client

Feature updates

Bug fixes

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

    9.5 - HugeGraph 0.8 Release Notes

    API & Client

Feature updates

Bug fixes

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

    9.6 - HugeGraph 0.7 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Internal changes

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Studio

Bug fixes

    9.7 - HugeGraph 0.6 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

Tools

Feature updates

Bug fixes

Loader

Feature updates

Bug fixes

    9.8 - HugeGraph 0.5 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.9 - HugeGraph 0.4.4 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.10 - HugeGraph 0.3.3 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.11 - HugeGraph 0.2 Release Notes

    API & Java Client

Feature updates

Version 0.2 implements the basic functionality of a graph database and provides the following features:

Metadata (Schema)

Vertex type (Vertex Label)

Edge type (Edge Label)

Property (Property Key)

Index (Index Label)

Metadata checks

Graph data

Vertex

Edge

Vertex/edge properties

Transactions

Indexes

Index types

Index operations

Query/traversal

Cache

Cacheable content

Cache features

API (RESTful API)

See the API documentation for more details

Backend support

Cassandra backend supported

Memory backend supported (for testing only)

Others

Configuration items supported

Multiple graph instances supported

Version check

    9.12 - HugeGraph 0.2.4 Release Notes

    API & Java Client

Feature updates

Metadata (Schema) related

Bug fixes

Graph data (Vertex, Edge) related

Feature updates

Bug fixes

Query, index, and cache related

Feature updates

Bug fixes

Others

Feature updates

Bug fixes

Tests

Tinkerpop compliance tests

Unit tests

Internal changes

    10 -

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the instructions in the GitHub CLA-Assistant comment when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

We agree to (sub)license the Contribution or any Materials containing, based on or derived from Your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

9. Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.


1.3 Glossary

Note: all times are in ms.

2 Test results

2.1 schema

Label          | Samples | Average | Median | 90% Line | Min | Max | Error% | Throughput | KB/sec
property_keys  | 331000  | 1       | 1      | 2        | 0   | 172 | 0.00%  | 920.7/sec  | 178.1
vertex_labels  | 331000  | 1       | 2      | 2        | 1   | 126 | 0.00%  | 920.7/sec  | 193.4
edge_labels    | 331000  | 2       | 2      | 3        | 1   | 158 | 0.00%  | 920.7/sec  | 242.8

Conclusion: under a sustained load of 1000 concurrent requests for 5 minutes, the schema APIs respond in 1-2 ms on average and handle the load comfortably.
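For reference, a minimal sketch of the kind of requests behind the property_keys / vertex_labels / edge_labels samples above, written with Python's requests library; the server address, graph name and JSON field names are assumptions based on the HugeGraph schema RESTful API and should be adjusted to the deployment under test.

import requests

BASE = "http://127.0.0.1:8080/apis/graphs/hugegraph/schema"  # assumed address and graph name

# property_keys: create one property key per request
requests.post(f"{BASE}/propertykeys",
              json={"name": "name", "data_type": "TEXT", "cardinality": "SINGLE"})

# vertex_labels: create one vertex label per request
requests.post(f"{BASE}/vertexlabels",
              json={"name": "person", "id_strategy": "PRIMARY_KEY",
                    "properties": ["name"], "primary_keys": ["name"]})

# edge_labels: create one edge label per request
requests.post(f"{BASE}/edgelabels",
              json={"name": "knows", "source_label": "person", "target_label": "person"})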

2.2 Single insert

2.2.1 Insert rate test
Load parameters

Test method: with a fixed concurrency level, measure the processing rate of the server and the backend.

Performance metrics
Label                  | Samples | Average | Median | 90% Line | Min | Max | Error% | Throughput | KB/sec
single_insert_vertices | 331000  | 0       | 1      | 1        | 0   | 21  | 0.00%  | 920.7/sec  | 234.4
single_insert_edges    | 331000  | 2       | 2      | 3        | 1   | 53  | 0.00%  | 920.7/sec  | 309.1
Conclusion
2.2.2 Load ceiling test

Test method: keep raising the concurrency level to find the upper load limit at which the server can still serve requests normally.

Load parameters
Performance metrics
Concurrency  | Samples | Average | Median | 90% Line | Min | Max  | Error% | Throughput | KB/sec
2000(vertex) | 661916  | 1       | 1      | 1        | 0   | 3012 | 0.00%  | 1842.9/sec | 469.1
4000(vertex) | 1316124 | 13      | 1      | 14       | 0   | 9023 | 0.00%  | 3673.1/sec | 935.0
5000(vertex) | 1468121 | 1010    | 1135   | 1227     | 0   | 9223 | 0.06%  | 4095.6/sec | 1046.0
7000(vertex) | 1378454 | 1617    | 1708   | 1886     | 0   | 9361 | 0.08%  | 3860.3/sec | 987.1
2000(edge)   | 629399  | 953     | 1043   | 1113     | 1   | 9001 | 0.00%  | 1750.3/sec | 587.6
3000(edge)   | 648364  | 2258    | 2404   | 2500     | 2   | 9001 | 0.00%  | 1810.7/sec | 607.9
4000(edge)   | 649904  | 1992    | 2112   | 2211     | 1   | 9001 | 0.06%  | 1812.5/sec | 608.5
Conclusion
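As a rough illustration of what a single_insert_vertices / single_insert_edges sample looks like, the sketch below inserts one vertex or one edge per request; the paths and field names follow the HugeGraph Vertex/Edge RESTful API as far as we know it, and the labels assume the schema from the earlier sketch.

import requests

BASE = "http://127.0.0.1:8080/apis/graphs/hugegraph/graph"  # assumed address and graph name

# single_insert_vertices: one vertex per request, the response carries the generated id
v1 = requests.post(f"{BASE}/vertices",
                   json={"label": "person", "properties": {"name": "marko"}}).json()
v2 = requests.post(f"{BASE}/vertices",
                   json={"label": "person", "properties": {"name": "josh"}}).json()

# single_insert_edges: one edge per request, linking the two vertices created above
requests.post(f"{BASE}/edges",
              json={"label": "knows",
                    "outV": v1["id"], "outVLabel": "person",
                    "inV": v2["id"], "inVLabel": "person"})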

2.3 Batch insert

2.3.1 Insert rate test
Load parameters

Test method: with a fixed concurrency level, measure the processing rate of the server and the backend.

Performance metrics
Label                 | Samples | Average | Median | 90% Line | Min | Max   | Error% | Throughput | KB/sec
batch_insert_vertices | 37162   | 8959    | 9595   | 9704     | 17  | 9852  | 0.00%  | 103.4/sec  | 393.3
batch_insert_edges    | 10800   | 31849   | 34544  | 35132    | 435 | 35747 | 0.00%  | 28.8/sec   | 814.9
Conclusion
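The batch_insert_vertices samples above go through the batch API, which accepts a JSON array of vertices per request. A hedged sketch follows; the /vertices/batch path and the 500-vertices-per-request ceiling (the batch.max_vertices_per_batch option mentioned in the changelog) are assumptions to verify against the running version.

import requests

BASE = "http://127.0.0.1:8080/apis/graphs/hugegraph/graph"  # assumed address and graph name

# batch_insert_vertices: many vertices in a single request (assumed default cap: 500)
batch = [{"label": "person", "properties": {"name": f"person-{i}"}} for i in range(500)]
resp = requests.post(f"{BASE}/vertices/batch", json=batch)
resp.raise_for_status()
print(f"inserted {len(resp.json())} vertices")  # the batch API is expected to return the new ids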

    8.3 - HugeGraph-Loader Performance

Use cases

When the graph data to be bulk-loaded (both vertices and edges) is at the billion-record level or below, or the total data volume is under a TB, the HugeGraph-Loader tool can be used to import graph data continuously and at high speed.

Performance

All tests use the edge data of the website dataset.

RocksDB standalone performance

Cassandra cluster performance

8.4 - HugeGraph Benchmark Performance

1 Test environment

1.1 Hardware

CPU                                          | Memory | NIC       | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD

1.2 Software

1.2.1 Test cases

The tests use graphdb-benchmark, a graph database benchmark suite. The suite mainly contains four categories of tests:

1.2.2 Test datasets

The tests use both synthetic and real-world data.

Sizes of the datasets used in this test:
Name                    | Vertices  | Edges     | File size
email-enron.txt         | 36,691    | 367,661   | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB

1.3 Service configuration

The Titan version that graphdb-benchmark is adapted to is 0.5.4.

2 Test results

2.1 Batch insertion performance

Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan     | 9.516            | 88.123           | 111.586
RocksDB   | 2.345            | 14.076           | 16.636
Cassandra | 11.930           | 108.709          | 101.959
Memory    | 3.077            | 15.204           | 13.841

Notes

Conclusion

2.2 Traversal performance

2.2.1 Terminology
2.2.2 FN performance
Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
Titan     | 7.724             | 70.935          | 128.884
RocksDB   | 8.876             | 65.852          | 63.388
Cassandra | 13.125            | 126.959         | 102.580
Memory    | 22.309            | 207.411         | 165.609

Notes

2.2.3 FA performance
Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan     | 7.119            | 63.353           | 115.633
RocksDB   | 6.032            | 64.526           | 52.721
Cassandra | 9.410            | 102.766          | 94.197
Memory    | 12.340           | 195.444          | 140.89

Notes

Conclusion
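FN (Find Neighbours) and FA (Find Adjacent) are neighbourhood queries from the graphdb-benchmark suite. Purely to illustrate the shape of the query being measured, an equivalent lookup can be expressed as a Gremlin query sent to HugeGraph's Gremlin endpoint; the endpoint path and response layout below are assumptions.

import requests

GREMLIN = "http://127.0.0.1:8080/apis/gremlin"  # assumed Gremlin REST endpoint

def find_neighbours(vertex_id):
    # FN-style lookup: all vertices adjacent to the given vertex
    body = {"gremlin": "g.V(vid).both()", "bindings": {"vid": vertex_id}}
    r = requests.post(GREMLIN, json=body)
    r.raise_for_status()
    return r.json()["result"]["data"]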

2.3 HugeGraph performance on common graph analysis methods

Terminology
FS performance
Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan     | 11.333           | 0.313            | 376.06
RocksDB   | 44.391           | 2.221            | 268.792
Cassandra | 39.845           | 3.337            | 331.113
Memory    | 35.638           | 2.059            | 388.987

Notes

Conclusion
K-neighbor performance
Vertex / Depth | 1      | 2      | 3      | 4      | 5      | 6
v1 (time)      | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 (time)    | 0.027s | 0.034s | 0.115s | 1.36s  | OOM    |
v1111 (time)   | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM

Notes

K-out performance
Vertex / Depth | 1      | 2      | 3      | 4      | 5         | 6
v1 (time)      | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s     | OOM
v1 (count)     | 10     | 133    | 2453   | 50,830 | 1,128,688 |
v111 (time)    | 0.032s | 0.042s | 0.136s | 1.25s  | 20.62s    | OOM
v111 (count)   | 10     | 211    | 4944   | 113150 | 2,629,970 |
v1111 (time)   | 0.039s | 0.045s | 0.053s | 1.10s  | 2.92s     | OOM
v1111 (count)  | 10     | 140    | 2555   | 50825  | 1,070,230 |

Notes

Conclusion
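The K-neighbor and K-out figures above are produced through the OLTP traverser APIs. A sketch of the corresponding HTTP calls follows; the /traversers/kout and /traversers/kneighbor paths, the source/max_depth/limit parameter names, and the JSON-quoted string id are assumptions to check against the Traverser API of the version in use.

import requests

TRAVERSERS = "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers"  # assumed address

# K-out: vertices exactly max_depth steps away from the start vertex
kout = requests.get(f"{TRAVERSERS}/kout",
                    params={"source": '"v1"', "max_depth": 3, "limit": 10000000})

# K-neighbor: vertices within max_depth steps of the start vertex
kneighbor = requests.get(f"{TRAVERSERS}/kneighbor",
                         params={"source": '"v1"', "max_depth": 3})

print(kout.json(), kneighbor.json())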

2.4 Comprehensive graph performance test - CW

Database        | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000
Titan           | 45.943     | 849.168    | 2737.117    | 9791.46
Memory(core)    | 41.077     | 1825.905   | *           | *
Cassandra(core) | 39.783     | 862.744    | 2423.136    | 6564.191
RocksDB(core)   | 33.383     | 199.894    | 763.869     | 1677.813

Notes

Conclusion

    9 - CHANGELOGS

    9.1 - HugeGraph 0.12 Release Notes

    API & Client

API updates

Other changes

Core & Server

Feature updates

Bug fixes

Configuration changes:

Other changes

    Loader

    Tools

    9.2 - HugeGraph 0.11 Release Notes

    API & Client

Feature updates

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Internal changes

    9.3 - HugeGraph 0.10 Release Notes

    API & Client

Feature updates

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Internal changes

    9.4 - HugeGraph 0.9 Release Notes

    API & Client

Feature updates

Bug fixes

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

    9.5 - HugeGraph 0.8 Release Notes

    API & Client

Feature updates

Bug fixes

Internal changes

Core

Feature updates

Bug fixes

Internal changes

Others

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

    9.6 - HugeGraph 0.7 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Internal changes

Loader

Feature updates

Bug fixes

Internal changes

Tools

Feature updates

Bug fixes

Studio

Bug fixes

    9.7 - HugeGraph 0.6 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

Tools

Feature updates

Bug fixes

Loader

Feature updates

Bug fixes

    9.8 - HugeGraph 0.5 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.9 - HugeGraph 0.4.4 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.10 - HugeGraph 0.3.3 Release Notes

    API & Java Client

Feature updates

Bug fixes

Core

Feature updates

Bug fixes

Tests

Internal changes

    9.11 - HugeGraph 0.2 Release Notes

    API & Java Client

Feature updates

Version 0.2 implements the basic features of a graph database, providing the following:

Metadata (Schema)

Vertex Label

Edge Label

Property Key

Index Label

Metadata checks

Graph data

Vertex

Edge

Vertex/edge properties

Transactions

Indexes

Index types

Index operations

Query/traversal

Cache

Cacheable content

Cache characteristics

RESTful API

See the API documentation for more details

Backend support

Cassandra backend supported

Memory backend supported (for testing only)

Others

Configuration options supported

Multiple graph instances supported

Version check

    9.12 - HugeGraph 0.2.4 Release Notes

    API & Java Client

Feature updates

Metadata (Schema) related

Bug fixes

Graph data (Vertex, Edge) related

Feature updates

Bug fixes

Query, index, and cache related

Feature updates

Bug fixes

Others

Feature updates

Bug fixes

Tests

TinkerPop compliance tests

Unit tests

Internal changes

10 - Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add Your name to the attribution mechanism customarily used in the Materials You Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

We agree to (sub)license the Contribution or any Materials containing, based on or derived from Your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

9. Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.


    diff --git a/cn/docs/changelog/_print/index.html b/cn/docs/changelog/_print/index.html index ffa067583..a85e27824 100644 --- a/cn/docs/changelog/_print/index.html +++ b/cn/docs/changelog/_print/index.html @@ -1,6 +1,6 @@ CHANGELOGS | HugeGraph

    1 - HugeGraph 0.12 Release Notes

    API & Client

    接口更新

    • 支持 https + auth 模式连接图服务 (hugegraph-client #109 #110)
    • 统一 kout/kneighbor 等 OLTP 接口的参数命名及默认值(hugegraph-client #122 #123)
    • 支持 RESTful 接口利用 P.textcontains() 进行属性全文检索(hugegraph #1312)
    • 增加 graph_read_mode API 接口,以切换 OLTP、OLAP 读模式(hugegraph #1332)
    • 支持 list/set 类型的聚合属性 aggregate property(hugegraph #1332)
    • 权限接口增加 METRICS 资源类型(hugegraph #1355、hugegraph-client #114)
    • 权限接口增加 SCHEMA 资源类型(hugegraph #1362、hugegraph-client #117)
    • 增加手动 compact API 接口,支持 rocksdb/cassandra/hbase 后端(hugegraph #1378)
    • 权限接口增加 login/logout API,支持颁发或回收 Token(hugegraph #1500、hugegraph-client #125)
    • 权限接口增加 project API(hugegraph #1504、hugegraph-client #127)
    • 增加 OLAP 回写接口,支持 cassandra/rocksdb 后端(hugegraph #1506、hugegraph-client #129)
    • 增加返回一个图的所有 Schema 的 API 接口(hugegraph #1567、hugegraph-client #134)
    • 变更 property key 创建与更新 API 的 HTTP 返回码为 202(hugegraph #1584)
    • 增强 Text.contains() 支持3种格式:“word”、"(word)"、"(word1|word2|word3)"(hugegraph #1652)
    • 统一了属性中特殊字符的行为(hugegraph #1670 #1684)
    • 支持动态创建图实例、克隆图实例、删除图实例(hugegraph-client #135)

    其它修改

    • 修复在恢复 index label 时 IndexLabelV56 id 丢失的问题(hugegraph-client #118)
    • 为 Edge 类增加 name() 方法(hugegraph-client #121)

    Core & Server

    功能更新

    • 支持动态创建图实例(hugegraph #1065)
    • 支持通过 Gremlin 调用 OLTP 算法(hugegraph #1289)
    • 支持多集群使用同一个图权限服务,以共享权限信息(hugegraph #1350)
    • 支持跨多节点的 Cache 缓存同步(hugegraph #1357)
    • 支持 OLTP 算法使用原生集合以降低 GC 压力提升性能(hugegraph #1409)
    • 支持对新增的 Raft 节点打快照或恢复快照(hugegraph #1439)
    • 支持对集合属性建立二级索引 Secondary Index(hugegraph #1474)
    • 支持审计日志,及其压缩、限速等功能(hugegraph #1492 #1493)
    • 支持 OLTP 算法使用高性能并行无锁原生集合以提升性能(hugegraph #1552)

    BUG修复

    • 修复带权最短路径算法(weighted shortest path)NPE问题 (hugegraph #1250)
    • 增加 Raft 相关的安全操作白名单(hugegraph #1257)
    • 修复 RocksDB 实例未正确关闭的问题(hugegraph #1264)
• 在清空数据 truncate 操作之后,显式地发起写快照 Raft Snapshot(hugegraph #1275)
    • 修复 Raft Leader 在收到 Follower 转发请求时未更新缓存的问题(hugegraph #1279)
    • 修复带权最短路径算法(weighted shortest path)结果不稳定的问题(hugegraph #1280)
    • 修复 rays 算法 limit 参数不生效问题(hugegraph #1284)
    • 修复 neighborrank 算法 capacity 参数未检查的问题(hugegraph #1290)
    • 修复 PostgreSQL 因为不存在与用户同名的数据库而初始化失败的问题(hugegraph #1293)
    • 修复 HBase 后端当启用 Kerberos 时初始化失败的问题(hugegraph #1294)
    • 修复 HBase/RocksDB 后端 shard 结束判断错误问题(hugegraph #1306)
    • 修复带权最短路径算法(weighted shortest path)未检查目标顶点存在的问题(hugegraph #1307)
    • 修复 personalrank/neighborrank 算法中非 String 类型 id 的问题(hugegraph #1310)
    • 检查必须是 master 节点才允许调度 gremlin job(hugegraph #1314)
    • 修复 g.V().hasLabel().limit(n) 因为索引覆盖导致的部分结果不准确问题(hugegraph #1316)
    • 修复 jaccardsimilarity 算法当并集为空时报 NaN 错误的问题(hugegraph #1324)
    • 修复 Raft Follower 节点操作 Schema 多节点之间数据不同步问题(hugegraph #1325)
    • 修复因为 tx 未关闭导致的 TTL 不生效问题(hugegraph #1330)
    • 修复 gremlin job 的执行结果大于 Cassandra 限制但小于任务限制时的异常处理(hugegraph #1334)
    • 检查权限接口 auth-delete 和 role-get API 操作时图必须存在(hugegraph #1338)
    • 修复异步任务结果中包含 path/tree 时系列化不正常的问题(hugegraph #1351)
    • 修复初始化 admin 用户时的 NPE 问题(hugegraph #1360)
    • 修复异步任务原子性操作问题,确保 update/get fields 及 re-schedule 的原子性(hugegraph #1361)
    • 修复权限 NONE 资源类型的问题(hugegraph #1362)
    • 修复启用权限后,truncate 操作报错 SecurityException 及管理员信息丢失问题(hugegraph #1365)
    • 修复启用权限后,解析数据忽略了权限异常的问题(hugegraph #1380)
    • 修复 AuthManager 在初始化时会尝试连接其它节点的问题(hugegraph #1381)
    • 修复特定的 shard 信息导致 base64 解码错误的问题(hugegraph #1383)
    • 修复启用权限后,使用 consistent-hash LB 在校验权限时,creator 为空的问题(hugegraph #1385)
    • 改进权限中 VAR 资源不再依赖于 VERTEX 资源(hugegraph #1386)
    • 规范启用权限后,Schema 操作仅依赖具体的资源(hugegraph #1387)
    • 规范启用权限后,部分操作由依赖 STATUS 资源改为依赖 ANY 资源(hugegraph #1391)
    • 规范启用权限后,禁止初始化管理员密码为空(hugegraph #1400)
    • 检查创建用户时 username/password 不允许为空(hugegraph #1402)
    • 修复更新 Label 时,PrimaryKey 或 SortKey 被设置为可空属性的问题(hugegraph #1406)
    • 修复 ScyllaDB 丢失分页结果问题(hugegraph #1407)
    • 修复带权最短路径算法(weighted shortest path)权重属性强制转换为 double 的问题(hugegraph #1432)
    • 统一 OLTP 算法中的 degree 参数命名(hugegraph #1433)
    • 修复 fusiformsimilarity 算法当 similars 为空的时候返回所有的顶点问题(hugegraph #1434)
    • 改进 paths 算法,当起始点与目标点相同时应该返回空路径(hugegraph #1435)
    • 修改 kout/kneighbor 的 limit 参数默认值 10 为 10000000(hugegraph #1436)
    • 修复分页信息中的 ‘+’ 被 URL 编码为空格的问题(hugegraph #1437)
    • 改进边更新接口的错误提示信息(hugegraph #1443)
    • 修复 kout 算法 degree 未在所有 label 范围生效的问题(hugegraph #1459)
    • 改进 kneighbor/kout 算法,起始点不允许出现在结果集中(hugegraph #1459 #1463)
    • 统一 kout/kneighbor 的 Get 和 Post 版本行为(hugegraph #1470)
    • 改进创建边时顶点类型不匹配的错误提示信息(hugegraph #1477)
    • 修复 Range Index 的残留索引问题(hugegraph #1498)
    • 修复权限操作未失效缓存的问题(hugegraph #1528)
    • 修复 sameneighbor 的 limit 参数默认值 10 为 10000000(hugegraph #1530)
    • 修复 clear API 不应该所有后端都调用 create snapshot 的问题(hugegraph #1532)
    • 修复当 loading 模式时创建 Index Label 阻塞问题(hugegraph #1548)
    • 修复增加图到 project 或从 project 移除图的问题(hugegraph #1562)
    • 改进权限操作的一些错误提示信息(hugegraph #1563)
    • 支持浮点属性设置为 Infinity/NaN 的值(hugegraph #1578)
    • 修复 Raft 启用 safe_read 时的 quorum read 问题(hugegraph #1618)
    • 修复 token 过期时间配置的单位问题(hugegraph #1625)
    • 修复 MySQL Statement 资源泄露问题(hugegraph #1627)
    • 修复竞争条件下 Schema.getIndexLabel 获取不到数据的问题(hugegraph #1629)
    • 修复 HugeVertex4Insert 无法系列化问题(hugegraph #1630)
    • 修复 MySQL count Statement 未关闭问题(hugegraph #1640)
    • 修复当删除 Index Label 异常时,导致状态不同步问题(hugegraph #1642)
    • 修复 MySQL 执行 gremlin timeout 导致的 statement 未关闭问题(hugegraph #1643)
    • 改进 Search Index 以兼容特殊 Unicode 字符:\u0000 to \u0003(hugegraph #1659)
    • 修复 #1659 引入的 Char 未转化为 String 的问题(hugegraph #1664)
    • 修复 has() + within() 查询时结果异常问题(hugegraph #1680)
    • 升级 Log4j 版本到 2.17 以修复安全漏洞(hugegraph #1686 #1698 #1702)
    • 修复 HBase 后端 shard scan 中 startkey 包含空串时 NPE 问题(hugegraph #1691)
    • 修复 paths 算法在深层环路遍历时性能下降问题 (hugegraph #1694)
    • 改进 personalrank 算法的参数默认值及错误检查(hugegraph #1695)
    • 修复 RESTful 接口 P.within 条件不生效问题(hugegraph #1704)
    • 修复启用权限时无法动态创建图的问题(hugegraph #1708)

    配置项修改:

    • 共享 SSL 相关配置项命名(hugegraph #1260)
    • 支持 RocksDB 配置项 rocksdb.level_compaction_dynamic_level_bytes(hugegraph #1262)
• 去除 RESTful Server 服务协议配置项 restserver.protocol,自动提取 URL 中的 Schema(hugegraph #1272)
    • 增加 PostgreSQL 配置项 jdbc.postgresql.connect_database(hugegraph #1293)
    • 增加针对顶点主键是否编码的配置项 vertex.encode_primary_key_number(hugegraph #1323)
    • 增加针对聚合查询是否启用索引优化的配置项 query.optimize_aggregate_by_index(hugegraph #1549)
    • 修改 cache_type 的默认值 l1 为 l2(hugegraph #1681)
    • 增加 JDBC 强制重连配置项 jdbc.forced_auto_reconnect(hugegraph #1710)

    其它修改

    • 增加默认的 SSL Certificate 文件(hugegraph #1254)
    • OLTP 并行请求共享线程池,而非每个请求使用单独的线程池(hugegraph #1258)
    • 修复 Example 的问题(hugegraph #1308)
    • 使用 jraft 版本 1.3.5(hugegraph #1313)
    • 如果启用了 Raft 模式时,关闭 RocksDB 的 WAL(hugegraph #1318)
    • 使用 TarLz4Util 来提升快照 Snapshot 压缩的性能(hugegraph #1336)
    • 升级存储的版本号(store version),因为 property key 增加了 read frequency(hugegraph #1341)
    • 顶点/边 vertex/edge 的 Get API 使用 queryVertex/queryEdge 方法来替代 iterator 方法(hugegraph #1345)
    • 支持 BFS 优化的多度查询(hugegraph #1359)
    • 改进 RocksDB deleteRange() 带来的查询性能问题(hugegraph #1375)
    • 修复 travis-ci cannot find symbol Namifiable 问题(hugegraph #1376)
    • 确保 RocksDB 快照的磁盘与 data path 指定的一致(hugegraph #1392)
    • 修复 MacOS 空闲内存 free_memory 计算不准确问题(hugegraph #1396)
    • 增加 Raft onBusy 回调来配合限速(hugegraph #1401)
    • 升级 netty-all 版本 4.1.13.Final 到 4.1.42.Final(hugegraph #1403)
    • 支持 TaskScheduler 暂停当设置为 loading 模式时(hugegraph #1414)
    • 修复 raft-tools 脚本的问题(hugegraph #1416)
    • 修复 license params 问题(hugegraph #1420)
    • 提升写权限日志的性能,通过 batch flush & async write 方式改进(hugegraph #1448)
    • 增加 MySQL 连接 URL 的日志记录(hugegraph #1451)
• 提升用户信息校验性能(hugegraph #1460)
    • 修复 TTL 因为起始时间问题导致的错误(hugegraph #1478)
    • 支持日志配置的热加载及对审计日志的压缩(hugegraph #1492)
    • 支持针对用户级别的审计日志的限速(hugegraph #1493)
    • 缓存 RamCache 支持用户自定义的过期时间(hugegraph #1494)
    • 在 auth client 端缓存 login role 以避免重复的 RPC 调用(hugegraph #1507)
    • 修复 IdSet.contains() 未复写 AbstractCollection.contains() 问题(hugegraph #1511)
    • 修复当 commitPartOfEdgeDeletions() 失败时,未回滚 rollback 的问题(hugegraph #1513)
    • 提升 Cache metrics 性能(hugegraph #1515)
    • 当发生 license 操作错误时,增加打印异常日志(hugegraph #1522)
    • 改进 SimilarsMap 实现(hugegraph #1523)
    • 使用 tokenless 方式来更新 coverage(hugegraph #1529)
    • 改进 project update 接口的代码(hugegraph #1537)
    • 允许从 option() 访问 GRAPH_STORE(hugegraph #1546)
    • 优化 kout/kneighbor 的 count 查询以避免拷贝集合(hugegraph #1550)
    • 优化 shortestpath 遍历方式,以数据量少的一端优先遍历(hugegraph #1569)
    • 完善 rocksdb.data_disks 配置项的 allowed keys 提示信息(hugegraph #1585)
    • 为 number id 优化 OLTP 遍历中的 id2code 方法性能(hugegraph #1623)
    • 优化 HugeElement.getProperties() 返回 Collection<Property>(hugegraph #1624)
    • 增加 APACHE PROPOSAL 文件(hugegraph #1644)
    • 改进 close tx 的流程(hugegraph #1655)
    • 当 reset() 时为 MySQL close 捕获所有类型异常(hugegraph #1661)
    • 改进 OLAP property 模块代码(hugegraph #1675)
    • 改进查询模块的执行性能(hugegraph #1711)

    Loader

    • 支持导入 Parquet 格式文件(hugegraph-loader #174)
    • 支持 HDFS Kerberos 权限验证(hugegraph-loader #176)
    • 支持 HTTPS 协议连接到服务端导入数据(hugegraph-loader #183)
    • 修复 trust store file 路径问题(hugegraph-loader #186)
    • 处理 loading mode 重置的异常(hugegraph-loader #187)
    • 增加在插入数据时对非空属性的检查(hugegraph-loader #190)
    • 修复客户端与服务端时区不同导致的时间判断问题(hugegraph-loader #192)
    • 优化数据解析性能(hugegraph-loader #194)
    • 当用户指定了文件头时,检查其必须不为空(hugegraph-loader #195)
    • 修复示例程序中 MySQL struct.json 格式问题(hugegraph-loader #198)
    • 修复顶点边导入速度不精确的问题(hugegraph-loader #200 #205)
    • 当导入启用 check-vertex 时,确保先导入顶点再导入边(hugegraph-loader #206)
    • 修复边 Json 数据导入格式不统一时数组溢出的问题(hugegraph-loader #211)
    • 修复因边 mapping 文件不存在导致的 NPE 问题(hugegraph-loader #213)
    • 修复读取时间可能出现负数的问题(hugegraph-loader #215)
    • 改进目录文件的日志打印(hugegraph-loader #223)
• 改进 loader 的 Schema 处理流程(hugegraph-loader #230)

    Tools

    • 支持 HTTPS 协议(hugegraph-tools #71)
    • 移除 –protocol 参数,直接从URL中自动提取(hugegraph-tools #72)
    • 支持将数据 dump 到 HDFS 文件系统(hugegraph-tools #73)
    • 修复 trust store file 路径问题(hugegraph-tools #75)
    • 支持权限信息的备份恢复(hugegraph-tools #76)
    • 支持无参数的 Printer 打印(hugegraph-tools #79)
    • 修复 MacOS free_memory 计算问题(hugegraph-tools #82)
• 支持备份恢复时指定线程数(hugegraph-tools #83)
    • 支持动态创建图、克隆图、删除图等命令(hugegraph-tools #95)

    2 - HugeGraph 0.11 Release Notes

    API & Client

    功能更新

    • 支持梭形相似度算法(hugegraph #671,hugegraph-client #62)
    • 支持创建 Schema 时,记录创建的时间(hugegraph #746,hugegraph-client #69)
    • 支持 RESTful API 中基于属性的范围查询顶点/边(hugegraph #782,hugegraph-client #73)
    • 支持顶点和边的 TTL (hugegraph #794,hugegraph-client #83)
    • 统一 RESTful API Server 和 Gremlin Server 的日期格式为字符串(hugegraph #1014,hugegraph-client #82)
    • 支持共同邻居,Jaccard 相似度,全部最短路径,带权最短路径和单源最短路径5种遍历算法(hugegraph #936,hugegraph-client #80)
    • 支持用户认证和细粒度权限控制(hugegraph #749,hugegraph #985,hugegraph-client #81)
    • 支持遍历 API 的顶点计数功能(hugegraph #995,hugegraph-client #84)
• 支持 HTTPS 协议(hugegraph #1036,hugegraph-client #85)
    • 支持创建索引时控制是否重建索引(hugegraph #1106,hugegraph-client #91)
    • 支持定制的 kout/kneighbor,多点最短路径,最相似 Jaccard 点和模板路径5种遍历算法(hugegraph #1174,hugegraph-client #100,hugegraph-client #106)

    内部修改

    • 启动 HugeGraphServer 出现异常时快速失败(hugegraph #748)
    • 定义 LOADING 模式来加速导入(hugegraph-client #101)

    Core

    功能更新

    • 支持多属性顶点/边的分页查询(hugegraph #759)
    • 支持聚合运算的性能优化(hugegraph #813)
    • 支持堆外缓存(hugegraph #846)
    • 支持属性权限管理(hugegraph #971)
    • 支持 MySQL 和 Memory 后端分片,并改进 HBase 分片方法(hugegraph #974)
    • 支持基于 Raft 的分布式一致性协议(hugegraph #1020)
    • 支持元数据拷贝功能(hugegraph #1024)
    • 支持集群的异步任务调度功能(hugegraph #1030)
    • 支持发生 OOM 时打印堆信息功能(hugegraph #1093)
    • 支持 Raft 状态机更新缓存(hugegraph #1119)
    • 支持 Raft 节点管理功能(hugegraph #1137)
    • 支持限制查询请求速率的功能(hugegraph #1158)
    • 支持顶点/边的属性默认值功能(hugegraph #1182)
    • 支持插件化查询加速机制 RamTable(hugegraph #1183)
    • 支持索引重建失败时设置为 INVALID 状态(hugegraph #1226)
    • 支持 HBase 启用 Kerberos 认证(hugegraph #1234)

    BUG修复

    • 修复配置权限时 start-hugegraph.sh 的超时问题(hugegraph #761)
    • 修复在 studio 执行 gremlin 时的 MySQL 连接失败问题(hugegraph #765)
    • 修复 HBase 后端 truncate 时出现的 TableNotFoundException(hugegraph #771)
    • 修复限速配置项值未检查的问题(hugegraph #773)
    • 修复唯一索引(Unique Index)的返回的异常信息不准确问题(hugegraph #797)
• 修复 RocksDB 后端执行 g.V().hasLabel().count() 时 OOM 问题 (hugegraph #798)
    • 修复 traverseByLabel() 分页设置错误问题(hugegraph #805)
    • 修复根据 ID 和 SortKeys 更新边属性时误创建边的问题(hugegraph #819)
    • 修复部分存储后端的覆盖写问题(hugegraph #820)
    • 修复保存执行失败的异步任务时无法取消的问题(hugegraph #827)
    • 修复 MySQL 后端在 SSL 模式下无法打开数据库的问题(hugegraph #842)
    • 修复索引查询时 offset 无效问题(hugegraph #866)
    • 修复 Gremlin 中绝对路径泄露的安全问题(hugegraph #871)
    • 修复 reconnectIfNeeded() 方法的 NPE 问题(hugegraph #874)
    • 修复 PostgreSQL 的 JDBC_URL 配置没有"/“前缀的问题(hugegraph #891)
    • 修复 RocksDB 内存统计问题(hugegraph #937)
    • 修复环路检测的两点成环无法检测的问题(hugegraph #939)
    • 修复梭形算法计算结束后没有清理计数的问题(hugegraph #947)
    • 修复 gremlin-console 无法工作的问题(hugegraph #1027)
    • 修复限制数目的按条件过滤邻接边问题(hugegraph #1057)
    • 修复 MySQL 执行 SQL 时的 auto-commit 问题(hugegraph #1064)
    • 修复通过两个索引查询时发生超时 80w 限制的问题(hugegraph #1088)
    • 修复范围索引检查规则错误(hugegraph #1090)
    • 修复删除残留索引的错误(hugegraph #1101)
    • 修复当前线程为 task-worker 时关闭事务卡住的问题(hugegraph #1111)
    • 修复最短路径查询出现 NoSuchElementException 的问题(hugegraph #1116)
    • 修复异步任务有时提交两次的问题(hugegraph #1130)
    • 修复值很小的 date 反序列化的问题(hugegraph #1152)
    • 修复遍历算法未检查起点或者终点是否存在的问题(hugegraph #1156)
    • 修复 bin/start-hugegraph.sh 参数解析错误的问题(hugegraph #1178)
    • 修复 gremlin-console 运行时的 log4j 错误信息的问题(hugegraph #1229)

    内部修改

    • 延迟检查非空属性(hugegraph #756)
    • 为存储后端增加查看集群节点信息的功能 (hugegraph #821)
    • 为 RocksDB 后端增加 compaction 高级配置项(hugegraph #825)
    • 增加 vertex.check_adjacent_vertex_exist 配置项(hugegraph #837)
    • 检查主键属性不允许为空(hugegraph #847)
    • 增加图名字的合法性检查(hugegraph #854)
    • 增加对非预期的 SysProp 的查询(hugegraph #862)
    • 使用 disableTableAsync 加速 HBase 后端的数据清除(hugegraph #868)
    • 允许 Gremlin 环境触发系统异步任务(hugegraph #892)
    • 编码字符类型索引中的类型 ID(hugegraph #894)
    • 安全模块允许 Cassandra 在执行 CQL 时按需创建线程(hugegraph #896)
    • 将 GremlinServer 的默认通道设置为 WsAndHttpChannelizer(hugegraph #903)
    • 将 Direction 和遍历算法的类导出到 Gremlin 环境(hugegraph #904)
    • 增加顶点属性缓存限制(hugegraph #941,hugegraph #942)
    • 优化列表属性的读(hugegraph #943)
    • 增加缓存的 L1 和 L2 配置(hugegraph #945)
    • 优化 EdgeId.asString() 方法(hugegraph #946)
    • 优化当顶点没有属性时跳过后端存储查询(hugegraph #951)
    • 创建名字相同但属性不同的元数据时抛出 ExistedException(hugegraph #1009)
    • 查询顶点和边后按需关闭事务(hugegraph #1039)
    • 当图关闭时清空缓存(hugegraph #1078)
    • 关闭图时加锁避免竞争问题(hugegraph #1104)
    • 优化顶点和边的删除效率,当提供 Label+ID 删除时免去查询(hugegraph #1150)
    • 使用 IntObjectMap 优化元数据缓存效率(hugegraph #1185)
    • 使用单个 Raft 节点管理目前的三个 store(hugegraph #1187)
    • 在重建索引时提前释放索引删除的锁(hugegraph #1193)
    • 在压缩和解压缩异步任务的结果时,使用 LZ4 替代 Gzip(hugegraph #1198)
    • 实现 RocksDB 删除 CF 操作的排他性来避免竞争(hugegraph #1202)
    • 修改 CSV reporter 的输出目录,并默认设置为不输出(hugegraph #1233)

    其它

    • cherry-pick 0.10.4 版本的 bug 修复代码(hugegraph #785,hugegraph #1047)
    • Jackson 升级到 2.10.2 版本(hugegraph #859)
    • Thanks 信息中增加对 Titan 的感谢(hugegraph #906)
    • 适配 TinkerPop 测试(hugegraph #1048)
    • 修改允许输出的日志最低等级为 TRACE(hugegraph #1050)
    • 增加 IDEA 的格式配置文件(hugegraph #1060)
    • 修复 Travis CI 太多错误信息的问题(hugegraph #1098)

    Loader

    功能更新

    • 支持读取 Hadoop 配置文件(hugegraph-loader #105)
    • 支持指定 Date 属性的时区(hugegraph-loader #107)
    • 支持从 ORC 压缩文件导入数据(hugegraph-loader #113)
    • 支持单条边插入时设置是否检查顶点(hugegraph-loader #117)
    • 支持从 Snappy-raw 压缩文件导入数据(hugegraph-loader #119)
    • 支持导入映射文件 2.0 版本(hugegraph-loader #121)
    • 增加一个将 utf8-bom 转换为 utf8 的命令行工具(hugegraph-loader #128)
    • 支持导入任务开始前清理元数据信息的功能(hugegraph-loader #140)
    • 支持 id 列作为属性存储(hugegraph-loader #143)
    • 支持导入任务配置 username(hugegraph-loader #146)
    • 支持从 Parquet 文件导入数据(hugegraph-loader #153)
    • 支持指定读取文件的最大行数(hugegraph-loader #159)
    • 支持 HTTPS 协议(hugegraph-loader #161)
    • 支持时间戳作为日期格式(hugegraph-loader #164)

    BUG修复

    • 修复行的 retainAll() 方法没有修改 names 和 values 数组(hugegraph-loader #110)
    • 修复 JSON 文件重新加载时的 NPE 问题(hugegraph-loader #112)

    内部修改

    • 只打印一次插入错误信息,以避免过多的错误信息(hugegraph-loader #118)
    • 拆分批量插入和单条插入的线程(hugegraph-loader #120)
    • CSV 的解析器改为 SimpleFlatMapper(hugegraph-loader #124)
    • 编码主键中的数字和日期字段(hugegraph-loader #136)
    • 确保主键列合法或者存在映射(hugegraph-loader #141)
    • 跳过主键属性全部为空的顶点(hugegraph-loader #166)
    • 在导入任务开始前设置为 LOADING 模式,并在导入完成后恢复原来模式(hugegraph-loader #169)
    • 改进停止导入任务的实现(hugegraph-loader #170)

    Tools

    功能更新

    • 支持 Memory 后端的备份功能 (hugegraph-tools #53)
    • 支持 HTTPS 协议(hugegraph-tools #58)
    • 支持 migrate 子命令配置用户名和密码(hugegraph-tools #61)
    • 支持备份顶点和边时指定类型和过滤属性信息(hugegraph-tools #63)

    BUG修复

    • 修复 dump 命令的 NPE 问题(hugegraph-tools #49)

    内部修改

    • 在 backup/dump 之前清除分片文件(hugegraph-tools #53)
    • 改进 HugeGraph-tools 的报错信息(hugegraph-tools #67)
    • 改进 migrate 子命令,删除掉不支持的子配置(hugegraph-tools #68)

    3 - HugeGraph 0.10 Release Notes

    API & Client

    功能更新

    • 支持 HugeGraphServer 服务端内存紧张时返回错误拒绝请求 (hugegraph #476)
    • 支持 API 白名单和 HugeGraphServer GC 频率控制功能 (hugegraph #522)
    • 支持 Rings API 的 source_in_ring 参数 (hugegraph #528,hugegraph-client #48)
    • 支持批量按策略更新属性接口 (hugegraph #493,hugegraph-client #46)
    • 支持 Shard Index 前缀与范围检索索引 (hugegraph #574,hugegraph-client #56)
    • 支持顶点的 UUID ID 类型 (hugegraph #618,hugegraph-client #59)
    • 支持唯一性约束索引(Unique Index) (hugegraph #636,hugegraph-client #60)
    • 支持 API 请求超时功能 (hugegraph #674)
    • 支持根据名称列表查询 schema (hugegraph #686,hugegraph-client #63)
    • 支持按分页方式获取异步任务 (hugegraph #720)

    内部修改

    • 保持 traverser 的参数与 server 端一致 (hugegraph-client #44)
    • 支持在 Shard 内使用分页方式遍历顶点或者边的方法 (hugegraph-client #47)
    • 支持 Gremlin 查询结果持有 GraphManager (hugegraph-client #49)
    • 改进 RestClient 的连接参数 (hugegraph-client #52)
    • 增加 Date 类型属性的测试 (hugegraph-client #55)
    • 适配 HugeGremlinException 异常 (hugegraph-client #57)
    • 增加新功能的版本匹配检查 (hugegraph-client #66)
    • 适配 UUID 的序列化 (hugegraph-client #67)

    Core

    功能更新

    • 支持 PostgreSQL 和 CockroachDB 存储后端 (hugegraph #484)
    • 支持负数索引 (hugegraph #513)
    • 支持边的 Vertex + SortKeys 的前缀范围查询 (hugegraph #574)
    • 支持顶点的邻接边按分页方式查询 (hugegraph #659)
    • 禁止通过 Gremlin 进行敏感操作 (hugegraph #176)
    • 支持 Lic 校验功能 (hugegraph #645)
    • 支持 Search Index 查询结果按匹配度排序的功能 (hugegraph #653)
    • 升级 tinkerpop 至版本 3.4.3 (hugegraph #648)

    BUG修复

    • 修复按分页方式查询边时剩余数目(remaining count)错误 (hugegraph #515)
    • 修复清空后端时边缓存未清空的问题 (hugegraph #488)
    • 修复无法插入 List 类型的属性问题 (hugegraph #534)
    • 修复 PostgreSQL 后端的 existDatabase(), clearBackend() 和 rollback()功能 (hugegraph #531)
    • 修复程序关闭时 HugeGraphServer 和 GremlinServer 残留问题 (hugegraph #554)
    • 修复在 LockTable 中重复抓锁的问题 (hugegraph #566)
    • 修复从 Edge 中获取的 Vertex 没有属性的问题 (hugegraph #604)
    • 修复交叉关闭 RocksDB 的连接池问题 (hugegraph #598)
    • 修复在超级点查询时 limit 失效问题 (hugegraph #607)
    • 修复使用 Equal 条件和分页的情况下查询 Range Index 只返回第一页的问题 (hugegraph #614)
    • 修复查询 limit 在删除部分数据后失效的问题 (hugegraph #610)
    • 修复 Example1 的查询错误 (hugegraph #638)
    • 修复 HBase 的批量提交部分错误问题 (hugegraph #634)
    • 修复索引搜索时 compareNumber() 方法的空指针问题 (hugegraph #629)
    • 修复更新属性值为已经删除的顶点或边的属性时失败问题 (hugegraph #679)
    • 修复 system 类型残留索引无法清除问题 (hugegraph #675)
    • 修复 HBase 在 Metrics 信息中的单位问题 (hugegraph #713)
    • 修复存储后端未初始化问题 (hugegraph #708)
    • 修复按 Label 删除边时导致的 IN 边残留问题 (hugegraph #727)
    • 修复 init-store 会生成多份 backend_info 问题 (hugegraph #723)

    内部修改

    • 抑制因 PostgreSQL 后端 database 不存在时的报警信息 (hugegraph #527)
    • 删除 PostgreSQL 后端的无用配置项 (hugegraph #533)
    • 改进错误信息中的 HugeType 为易读字符串 (hugegraph #546)
    • 增加 jdbc.storage_engine 配置项指定存储引擎 (hugegraph #555)
    • 增加使用后端链接时按需重连功能 (hugegraph #562)
    • 避免打印空的查询条件 (hugegraph #583)
    • 缩减 Variable 的字符串长度 (hugegraph #581)
    • 增加 RocksDB 后端的 cache 配置项 (hugegraph #567)
    • 改进异步任务的异常信息 (hugegraph #596)
    • 将 Range Index 拆分成 INT,LONG,FLOAT,DOUBLE 四个表存储 (hugegraph #574)
    • 改进顶点和边 API 的 Metrics 名字 (hugegraph #631)
    • 增加 G1GC 和 GC Log 的配置项 (hugegraph #616)
    • 拆分顶点和边的 Label Index 表 (hugegraph #635)
    • 减少顶点和边的属性存储空间 (hugegraph #650)
    • 支持对 Secondary Index 和 Primary Key 中的数字进行编码 (hugegraph #676)
    • 减少顶点和边的 ID 存储空间 (hugegraph #661)
    • 支持 Cassandra 后端存储的二进制序列化存储 (hugegraph #680)
    • 放松对最小内存的限制 (hugegraph #689)
    • 修复 RocksDB 后端批量写时的 Invalid column family 问题 (hugegraph #701)
    • 更新异步任务状态时删除残留索引 (hugegraph #719)
    • 删除 ScyllaDB 的 Label Index 表 (hugegraph #717)
    • 启动时使用多线程方式打开 RocksDB 后端存储多个数据目录 (hugegraph #721)
    • RocksDB 版本从 v5.17.2 升级至 v6.3.6 (hugegraph #722)

    其它

    • 增加 API tests 到 codecov 统计中 (hugegraph #711)
    • 改进配置文件的默认配置项 (hugegraph #575)
    • 改进 README 中的致谢信息 (hugegraph #548)

    Loader

    功能更新

    • 支持 JSON 数据源的 selected 字段 (hugegraph-loader #62)
    • 支持定制化 List 元素之间的分隔符 (hugegraph-loader #66)
    • 支持值映射 (hugegraph-loader #67)
    • 支持通过文件后缀过滤文件 (hugegraph-loader #82)
    • 支持对导入进度进行记录和断点续传 (hugegraph-loader #70,hugegraph-loader #87)
    • 支持从不同的关系型数据库中读取 Header 信息 (hugegraph-loader #79)
    • 支持属性为 Unsigned Long 类型值 (hugegraph-loader #91)
    • 支持顶点的 UUID ID 类型 (hugegraph-loader #98)
    • 支持按照策略批量更新属性 (hugegraph-loader #97)

    BUG修复

    • 修复 nullable key 在 mapping field 不工作的问题 (hugegraph-loader #64)
    • 修复 Parse Exception 无法捕获的问题 (hugegraph-loader #74)
    • 修复在等待异步任务完成时获取信号量数目错误的问题 (hugegraph-loader #86)
    • 修复空表时 hasNext() 返回 true 的问题 (hugegraph-loader #90)
    • 修复布尔值解析错误问题 (hugegraph-loader #92)

    内部修改

    • 增加 HTTP 连接参数 (hugegraph-loader #81)
    • 改进导入完成的总结信息 (hugegraph-loader #80)
    • 改进一行数据缺少列或者有多余列的处理逻辑 (hugegraph-loader #93)

    Tools

    功能更新

    • 支持 0.8 版本 server 备份的数据恢复至 0.9 版本的 server 中 (hugegraph-tools #34)
    • 增加 timeout 全局参数 (hugegraph-tools #44)
    • 增加 migrate 子命令支持迁移图 (hugegraph-tools #45)

    BUG修复

    • 修复 dump 命令不支持 split size 参数的问题 (hugegraph-tools #32)

    内部修改

    • 删除 Hadoop 对 Jersey 1.19的依赖 (hugegraph-tools #31)
    • 优化子命令在 help 信息中的排序 (hugegraph-tools #37)
    • 使用 log4j2 清除 log4j 的警告信息 (hugegraph-tools #39)

    4 - HugeGraph 0.9 Release Notes

    API & Client

    功能更新

    • 增加 personal rank API 和 neighbor rank API (hugegraph #274)
    • Shortest path API 增加 skip_degree 参数跳过超级点(hugegraph #433,hugegraph-client #42)
    • vertex/edge 的 scan API 支持分页机制 (hugegraph #428,hugegraph-client #35)
    • VertexAPI 使用简化的属性序列化器 (hugegraph #332,hugegraph-client #37)
    • 增加 customized paths API 和 customized crosspoints API (hugegraph #306,hugegraph-client #40)
    • 在 server 端所有线程忙时返回503错误 (hugegraph #343)
    • 保持 API 的 depth 和 degree 参数一致 (hugegraph #252,hugegraph-client #30)

    BUG修复

    • 增加属性的时候验证 Date 而非 Timestamp 的值 (hugegraph-client #26)

    内部修改

    • RestClient 支持重用连接 (hugegraph-client #33)
    • 使用 JsonUtil 替换冗余的 ObjectMapper (hugegraph-client #41)
    • Edge 直接引用 Vertex 使得批量插入更友好 (hugegraph-client #29)
    • 使用 JaCoCo 替换 Cobertura 统计代码覆盖率 (hugegraph-client #39)
    • 改进 Shard 反序列化机制 (hugegraph-client #34)

    Core

    功能更新

    • 支持 Cassandra 的 NetworkTopologyStrategy (hugegraph #448)
    • 元数据删除和索引重建使用分页机制 (hugegraph #417)
    • 支持将 HugeGraphServer 作为系统服务 (hugegraph #170)
    • 单一索引查询支持分页机制 (hugegraph #328)
    • 在初始化图库时支持定制化插件 (hugegraph #364)
    • 为HBase后端增加 hbase.zookeeper.znode.parent 配置项 (hugegraph #333)
    • 支持异步 Gremlin 任务的进度更新 (hugegraph #325)
    • 使用异步任务的方式删除残留索引 (hugegraph #285)
    • 支持按 sortKeys 范围查找功能 (hugegraph #271)

    BUG修复

    • 修复二级索引删除时 Cassandra 后端的 batch 超过65535限制的问题 (hugegraph #386)
    • 修复 RocksDB 磁盘利用率的 metrics 不正确问题 (hugegraph #326)
    • 修复异步索引删除错误修复 (hugegraph #336)
    • 修复 BackendSessionPool.close() 的竞争条件问题 (hugegraph #330)
    • 修复保留的系统 ID 不工作问题 (hugegraph #315)
    • 修复 cache 的 metrics 信息丢失问题 (hugegraph #321)
    • 修复使用 hasId() 按 id 查询顶点时不支持数字 id 问题 (hugegraph #302)
    • 修复重建索引时的 80w 限制问题和 Cassandra 后端的 batch 65535问题 (hugegraph #292)
    • 修复残留索引删除无法处理未展开(none-flatten)查询的问题 (hugegraph #281)

    内部修改

    • 迭代器变量统一命名为 ‘iter’(hugegraph #438)
    • 增加 PageState.page() 方法统一获取分页信息接口 (hugegraph #429)
    • 为基于 mapdb 的内存版后端调整代码结构,增加测试用例 (hugegraph #357)
    • 支持代码覆盖率统计 (hugegraph #376)
    • 设置 tx capacity 的下限为 COMMIT_BATCH(默认为500) (hugegraph #379)
    • 增加 shutdown hook 来自动关闭线程池 (hugegraph #355)
    • PerfExample 的统计时间排除环境初始化时间 (hugegraph #329)
    • 改进 BinarySerializer 中的 schema 序列化 (hugegraph #316)
    • 避免对 primary key 的属性创建多余的索引 (hugegraph #317)
    • 限制 Gremlin 异步任务的名字小于256字节 (hugegraph #313)
    • 使用 multi-get 优化 HBase 后端的按 id 查询 (hugegraph #279)
    • 支持更多的日期数据类型 (hugegraph #274)
    • 修改 Cassandra 和 HBase 的 port 范围为(1,65535) (hugegraph #263)

    其它

    • 增加 travis API 测试 (hugegraph #299)
    • 删除 rest-server.properties 中的 GremlinServer 相关的默认配置项 (hugegraph #290)

    Loader

    功能更新

    • 支持从 HDFS 和 关系型数据库导入数据 (hugegraph-loader #14)
    • 支持传递权限 token 参数(hugegraph-loader #46)
    • 支持通过 regex 指定要跳过的行 (hugegraph-loader #43)
    • 支持导入 TEXT 文件时的 List/Set 属性(hugegraph-loader #38)
    • 支持自定义的日期格式 (hugegraph-loader #28)
    • 支持从指定目录导入数据 (hugegraph-loader #33)
    • 支持忽略最后多余的列或者 null 值的列 (hugegraph-loader #23)

    BUG修复

    • 修复 Example 问题(hugegraph-loader #57)
    • 修复当 vertex 是 customized ID 策略时边解析问题(hugegraph-loader #24)

    内部修改

    • URL regex 改进 (hugegraph-loader #47)

    Tools

    功能更新

    • 支持海量数据备份和恢复到本地和 HDFS,并支持压缩 (hugegraph-tools #21)
    • 支持异步任务取消和清理功能 (hugegraph-tools #20)
    • 改进 graph-clear 命令的提示信息 (hugegraph-tools #23)

    BUG修复

    • 修复 restore 命令总是使用 ‘hugegraph’ 作为目标图的问题,支持指定图 (hugegraph-tools #26)

    5 - HugeGraph 0.8 Release Notes

    API & Client

    功能更新

    • 服务端增加 rays 和 rings 的 RESTful API(hugegraph #45)
    • 使创建 IndexLabel 返回异步任务(hugegraph #95,hugegraph-client #9)
    • 客户端增加恢复模式相关的 API(hugegraph-client #10)
    • 让 task-list API 不返回 task_input 和 task_result(hugegraph #143)
    • 增加取消异步任务的API(hugegraph #167,hugegraph-client #15)
    • 增加获取后端 metrics 的 API(hugegraph #155)

    BUG修复

    • 分页获取时最后一页的 page 应该为 null 而非 “null”(hugegraph #168)
    • 分页迭代获取服务端已经没有下一页了应该停止获取(hugegraph-client #16)
    • 添加顶点使用自定义 Number Id 时报类型无法转换(hugegraph-client #21)

    内部修改

    • 增加持续集成测试(hugegraph-client #19)

    Core

    功能更新

    • 取消异步任务通过 label 查询时 80w 的限制(hugegraph #93)
    • 允许 cardinality 为 set 时传入 Json List 形式的属性值(hugegraph #109)
    • 支持在恢复模式和合并模式来恢复图(hugegraph #114)
    • RocksDB 后端支持多个图指定为同一个存储目录(hugegraph #123)
    • 支持用户自定义权限认证器(hugegraph-loader #133)
    • 当服务重启后重新开始未完成的任务(hugegraph #188)
    • 当顶点的 Id 策略为自定义时,检查是否已存在相同 Id 的顶点(hugegraph #189)

    BUG修复

    • 增加对 HasContainer 的 predicate 不为 null 的检查(hugegraph #16)
    • RocksDB 后端由于数据目录和日志目录错误导致 init-store 失败(hugegraph #25)
    • 启动 hugegraph 时由于 logs 目录不存在导致提示超时但实际可访问(hugegraph #38)
    • ScyllaDB 后端遗漏注册顶点表(hugegraph #47)
    • 使用 hasLabel 查询传入多个 label 时失败(hugegraph #50)
    • Memory 后端未初始化 task 相关的 schema(hugegraph #100)
    • 当使用 hasLabel 查询时,如果元素数量超过 80w,即使加上 limit 也会报错(hugegraph #104)
    • 任务的在运行之后没有保存过状态(hugegraph #113)
    • 检查后端版本信息时直接强转 HugeGraphAuthProxy 为 HugeGraph(hugegraph #127)
    • 配置项 batch.max_vertices_per_batch 未生效(hugegraph #130)
    • 配置文件 rest-server.properties 有错误时 HugeGraphServer 启动不报错,但是无法访问(hugegraph #131)
    • MySQL 后端某个线程的提交对其他线程不可见(hugegraph #163)
    • 使用 union(branch) + has(date) 查询时提示 String 无法转换为 Date(hugegraph #181)
    • 使用 RocksDB 后端带 limit 查询顶点时会返回不完整的结果(hugegraph #197)
    • 提示其他线程无法操作 tx(hugegraph #204)

    内部修改

    • 拆分 graph.cache_xx 配置项为 vertex.cache_xx 和 edge.cache_xx 两类(hugegraph #56)
    • 去除 hugegraph-dist 对 hugegraph-api 的依赖(hugegraph #61)
    • 优化集合取交集和取差集的操作(hugegraph #85)
    • 优化 transaction 的缓存处理和索引及 Id 查询(hugegraph #105)
    • 给各线程池的线程命名(hugegraph #124)
    • 增加并优化了一些 metrics 统计(hugegraph #138)
    • 增加了对未完成任务的 metrics 记录(hugegraph #141)
    • 让索引更新以分批方式提交,而不是全量提交(hugegraph #150)
    • 在添加顶点/边时一直持有 schema 的读锁,直到提交/回滚完成(hugegraph #180)
    • 加速 Tinkerpop 测试(hugegraph #19)
    • 修复 Tinkerpop 测试在 resource 目录下找不到 filter 文件的 BUG(hugegraph #26)
    • 开启 Tinkerpop 测试中 supportCustomIds 特性(hugegraph #69)
    • 持续集成中添加 HBase 后端的测试(hugegraph #41)
    • 避免持续集成的 deploy 脚本运行多次(hugegraph #170)
    • 修复 cache 单元测试跑不过的问题(hugegraph #177)
    • 持续集成中修改部分后端的存储为 tmpfs 以加快测试速度(hugegraph #206)

    其它

    • 增加 issue 模版(hugegraph #42)
    • 增加 CONTRIBUTING 文件(hugegraph #59)

    Loader

    功能更新

    • 支持忽略源文件某些特定列(hugegraph-loader #2)
    • 支持导入 cardinality 为 Set 的属性数据(hugegraph-loader #10)
    • 单条插入也使用多个线程执行,解决了错误多时最后单条导入慢的问题(hugegraph-loader #12)

    BUG修复

    • 导入过程可能统计出错(hugegraph-loader #4)
    • 顶点使用自定义 Number Id 导入出错(hugegraph-loader #6)
    • 顶点使用联合主键时导入出错(hugegraph-loader #18)

    内部修改

    • 增加持续集成测试(hugegraph-loader #8)
    • 优化检测到文件不存在时的提示信息(hugegraph-loader #16)

    Tools

    功能更新

    • 增加 KgDumper (hugegraph-tools #6)
    • 支持在恢复模式和合并模式中恢复图(hugegraph-tools #9)

    BUG修复

    • 脚本中的工具函数 get_ip 在系统未安装 ifconfig 时报错(hugegraph-tools #13)

    6 - HugeGraph 0.7 Release Notes

    API & Java Client

    功能更新

    • 支持异步删除元数据和重建索引(HugeGraph-889)
    • 加入监控API,并与Gremlin的监控框架集成(HugeGraph-1273)

    BUG修复

    • EdgeAPI更新属性时会将属性值也置为属性键(HugeGraph-81)
    • 当删除顶点或边时,如果id非法应该返回400错误而非404(HugeGraph-1337)

    Core

    功能更新

    • 支持HBase后端存储(HugeGraph-1280)
    • 增加异步API框架,耗时操作可通过调用异步API实现(HugeGraph-387)
    • 支持对长属性列建立二级索引,取消目前索引列长度256字节的限制(HugeGraph-1314)
    • 支持顶点属性的“创建或更新”操作(HugeGraph-1303)
    • 支持全文检索功能(HugeGraph-1322)
    • 支持数据库表的版本号检查(HugeGraph-1328)
    • 删除顶点时,如果遇到超级点的时候报错"Batch too large"或“Batch 65535 statements”(HugeGraph-1354)
    • 支持异步删除元数据和重建索引(HugeGraph-889)
    • 支持异步长时间执行Gremlin任务(HugeGraph-889)

    BUG修复

    • 防止超级点访问时查询过多下一层顶点而阻塞服务(HugeGraph-1302)
    • HBase初始化时报错连接已经关闭(HugeGraph-1318)
    • 按照date属性过滤顶点报错String无法转为Date(HugeGraph-1319)
    • 残留索引删除,对range索引的判断存在错误(HugeGraph-1291)
    • 支持组合索引后,残留索引清理没有考虑索引组合的情况(HugeGraph-1311)
    • 根据otherV的条件来删除边时,可能会因为边的顶点不存在导致错误(HugeGraph-1347)
    • label索引对offset和limit结果错误(HugeGraph-1329)
    • vertex label或者edge label没有开启label index,删除label会导致数据无法删除(HugeGraph-1355)

    内部修改

    • hbase后端代码引入较新版本的Jackson-databind包,导致HugeGraphServer启动异常(HugeGraph-1306)
    • Core和Client都自己持有一个shard类,而不是依赖于common模块(HugeGraph-1316)
    • 去掉rebuild index和删除vertex label和edge label时的80w的capacity限制(HugeGraph-1297)
    • 所有schema操作需要考虑同步问题(HugeGraph-1279)
    • 拆分Cassandra的索引表,把element id每条一行,避免聚合高时,导入速度非常慢甚至卡住(HugeGraph-1304)
    • 将hugegraph-test中关于common的测试用例移动到hugegraph-common中(HugeGraph-1297)
    • 异步任务支持保存任务参数,以支持任务恢复(HugeGraph-1344)
    • 支持通过脚本部署文档到GitHub(HugeGraph-1351)
    • RocksDB和Hbase后端索引删除实现(HugeGraph-1317)

    Loader

    功能更新

    • HugeLoader支持用户手动创建schema,以文件的方式传入(HugeGraph-1295)

    BUG修复

    • HugeLoader导数据时未区分输入文件的编码,导致可能产生乱码(HugeGraph-1288)
    • HugeLoader打包的example目录的三个子目录下没有文件(HugeGraph-1288)
    • 导入的CSV文件中如果数据列本身包含逗号会解析出错(HugeGraph-1320)
    • 批量插入避免单条失败导致整个batch都无法插入(HugeGraph-1336)
    • 异常信息作为模板打印异常(HugeGraph-1345)
    • 导入边数据,当列数不对时导致程序退出(HugeGraph-1346)
    • HugeLoader的自动创建schema失败(HugeGraph-1363)
    • ID长度检查应该检查字节长度而非字符串长度(HugeGraph-1374)

    内部修改

    • 添加测试用例(HugeGraph-1361)

    Tools

    功能更新

    • backup/restore使用多线程加速,并增加retry机制(HugeGraph-1307)
    • 一键部署支持传入路径以存放包(HugeGraph-1325)
    • 实现dump图功能(内存构建顶点及关联边)(HugeGraph-1339)
    • 增加backup-scheduler功能,支持定时备份且保留一定数目最新备份(HugeGraph-1326)
    • 增加异步任务查询和异步执行Gremlin的功能(HugeGraph-1357)

    BUG修复

    • hugegraph-tools的backup和restore编码为UTF-8(HugeGraph-1321)
    • hugegraph-tools设置默认JVM堆大小和发布版本号(HugeGraph-1340)

    Studio

    BUG修复

    • HugeStudio中顶点id包含换行符时g.V()会导致groovy解析出错(HugeGraph-1292)
    • 限制返回的顶点及边的数量(HugeGraph-1333)
    • 加载note出现消失或者卡住情况(HugeGraph-1353)
    • HugeStudio打包时,编译失败但没有报错,导致发布包无法启动(HugeGraph-1368)

    7 - HugeGraph 0.6 Release Notes

    API & Java Client

    功能更新

    • 增加RESTFul API paths和crosspoints,找出source到target顶点间多条路径或包含交叉点的路径(HugeGraph-1210)
    • 在API层添加批量插入并发数的控制,避免出现全部的线程都用于写而无法查询的情况(HugeGraph-1228)
    • 增加scan-API,允许客户端并发地获取顶点和边(HugeGraph-1197)
    • Client支持传入用户名密码访问带权限控制的HugeGraph(HugeGraph-1256)
    • 为顶点及边的list API添加offset参数(HugeGraph-1261)
    • RESTful API的顶点/边的list不允许同时传入page 和 [label,属性](HugeGraph-1262)
    • k-out、K-neighbor、paths、shortestpath等API增加degree、capacity和limit(HugeGraph-1176)
    • 增加restore status的set/get/clear接口(HugeGraph-1272)

    BUG修复

    • 使 RestClient的basic auth使用Preemptive模式(HugeGraph-1257)
    • HugeGraph-Client中由ResultSet获取多次迭代器,除第一次外其他的无法迭代(HugeGraph-1278)

    Core

    功能更新

    • RocksDB实现scan特性(HugeGraph-1198)
    • Schema userdata 提供删除 key 功能(HugeGraph-1195)
    • 支持date类型属性的范围查询(HugeGraph-1208)
    • limit下沉到backend,尽可能不进行多余的索引读取(HugeGraph-1234)
    • 增加 API 权限与访问控制(HugeGraph-1162)
    • 禁止多个后端配置store为相同的值(HugeGraph-1269)

    BUG修复

    • RocksDB的Range查询时如果只指定上界或下界会查出其他IndexLabel的记录(HugeGraph-1211)
    • RocksDB带limit查询时,graphTransaction查询返回的结果多一个(HugeGraph-1234)
    • init-store在CentOS上依赖通用的io.netty有时会卡住,改为使用netty-transport-native-epoll(HugeGraph-1255)
    • Cassandra后端in语句(按id查询)元素个数最大65535(HugeGraph-1239)
    • 主键加索引(或普通属性)作为查询条件时报错(HugeGraph-1276)
    • init-store.sh在Centos平台上初始化失败或者卡住(HugeGraph-1255)

    测试

    内部修改

    • 将compareNumber方法搬移至common模块(HugeGraph-1208)
    • 修复HugeGraphServer无法在Ubuntu机器上启动的Bug(HugeGraph-1154)
    • 修复init-store.sh无法在bin目录下执行的BUG(HugeGraph-1223)
    • 修复HugeGraphServer启动过程中无法通过CTRL+C终止的BUG(HugeGraph-1223)
    • HugeGraphServer启动前检查端口是否被占用(HugeGraph-1223)
    • HugeGraphServer启动前检查系统JDK是否安装以及版本是否为1.8(HugeGraph-1223)
    • 给HugeConfig类增加getMap()方法(HugeGraph-1236)
    • 修改默认配置项,后端使用RocksDB,注释重要的配置项(HugeGraph-1240)
    • 重命名userData为userdata(HugeGraph-1249)
    • centos 4.3系统HugeGraphServer进程使用jps命令查不到
    • 增加配置项ALLOW_TRACE,允许设置是否返回exception stack trace(HugeGraph-81)

    Tools

    功能更新

    • 增加自动化部署工具以安装所有组件(HugeGraph-1267)
    • 增加clear的脚本,并拆分deploy和start-all(HugeGraph-1274)
    • 对hugegraph服务进行监控以提高可用性(HugeGraph-1266)
    • 增加backup/restore功能和命令(HugeGraph-1272)
    • 增加graphs API对应的命令(HugeGraph-1272)

    BUG修复

    Loader

    功能更新

    • 默认添加csv及json的示例(HugeGraph-1259)

    BUG修复

    8 - HugeGraph 0.5 Release Notes

    API & Java Client

    功能更新

    • VertexLabel与EdgeLabel增加bool参数enable_label_index表述是否构建label索引(HugeGraph-1085)
    • 增加RESTful API来支持高效shortest path,K-out和K-neighbor查询(HugeGraph-944)
    • 增加RESTful API支持按id列表批量查询顶点(HugeGraph-1153)
    • 支持迭代获取全部的顶点和边,使用分页实现(HugeGraph-1166)
    • 顶点id中包含 / % 等 URL 保留字符时通过 VertexAPI 查不出来(HugeGraph-1127)
    • 批量插入边时是否检查vertex的RESTful API参数从checkVertex改为check_vertex (HugeGraph-81)

    BUG修复

    • hasId()无法正确匹配LongId(HugeGraph-1083)

    Core

    功能更新

    • RocksDB支持常用配置项(HugeGraph-1068)
    • 支持插入、删除、更新等操作的限速(HugeGraph-1071)
    • 支持RocksDB导入sst文件方案(HugeGraph-1077)
    • 增加MySQL后端存储(HugeGraph-1091)
    • 增加Palo后端存储(HugeGraph-1092)
    • 增加开关:支持是否构建顶点/边的label index(HugeGraph-1085)
    • 支持API分页获取数据(HugeGraph-1105)
    • RocksDB配置的数据存放目录如果不存在则自动创建(HugeGraph-1135)
    • 增加高级遍历函数shortest path、K-neighbor,K-out和按id列表批量查询顶点(HugeGraph-944)
    • init-store.sh增加超时重试机制(HugeGraph-1150)
    • 将边表拆分两个表:OUT表、IN表(HugeGraph-1002)
    • 限制顶点ID最大长度为128字节(HugeGraph-1168)
    • Cassandra通过压缩数据(可配置snappy、lz4)进行优化(HugeGraph-428)
    • 支持IN和OR操作(HugeGraph-137)
    • 支持RocksDB并行写多个磁盘(HugeGraph-1177)
    • MySQL通过批量插入进行性能优化(HugeGraph-1188)

    BUG修复

    • Kryo系列化多线程时异常(HugeGraph-1066)
    • RocksDB索引内容中重复写了两次elem-id(HugeGraph-1094)
    • SnowflakeIdGenerator.instance在多线程环境下可能会初始化多个实例(HugeGraph-1095)
    • 如果查询边的顶点但顶点不存在时,异常信息不够明确(HugeGraph-1101)
    • RocksDB配置了多个图时,init-store失败(HugeGraph-1151)
    • 无法支持 Date 类型的属性值(HugeGraph-1165)
    • 创建了系统内部索引,但无法根据其进行搜索(HugeGraph-1167)
    • 拆表后根据label删除边时,edge-in表中的记录未被删除成功(HugeGraph-1182)

    测试

    • 增加配置项:vertex.force_id_string,跑 tinkerpop 测试时打开(HugeGraph-1069)

    内部修改

    • common库OptionChecker增加allowValues()函数用于枚举值(HugeGraph-1075)
    • 清理无用、版本老旧的依赖包,减少打包的压缩包的大小(HugeGraph-1078)
    • HugeConfig通过文件路径构造时,无法检查多次配置的配置项的值(HugeGraph-1079)
    • Server启动时可以支持智能分配最大内存(HugeGraph-1154)
    • 修复Mac OS因为不支持free命令导致无法启动server的问题(HugeGraph-1154)
    • 修改配置项的注册方式为字符串式,避免直接依赖Backend包(HugeGraph-1171)
    • 增加StoreDumper工具以查看后端存储的数据内容(HugeGraph-1172)
    • Jenkins把所有与内部服务器有关的构建机器信息都参数化传入(HugeGraph-1179)
    • 将RestClient移到common模块,令server和client都依赖common(HugeGraph-1183)
    • 增加配置项dump工具ConfDumper(HugeGraph-1193)

    9 - HugeGraph 0.4.4 Release Notes

    API & Java Client

    Feature updates

    • HugeGraph-Server supports WebSocket so that Gremlin-Console can connect to it, and supports writing groovy scripts that directly call Core code (HugeGraph-977)
    • Adapt to Schema-id (HugeGraph-1038)

    Bug fixes

    • hugegraph-0.3.3: deleting a vertex property with properties=null in the body returns 500 with a NullPointerException (HugeGraph-950)
    • hugegraph-0.3.3: graph.schema().getVertexLabel() throws a NullPointerException (HugeGraph-955)
    • The property collections of vertices and edges in HugeGraph-Client are not thread-safe (HugeGraph-1013)
    • The exception message of batch operations cannot be printed (HugeGraph-1013)
    • Exception messages are hard to read because property keys are shown by id, which users cannot recognize immediately (HugeGraph-1055)
    • Batch-inserting vertex entities with one null body returns 500 with a NullPointerException (HugeGraph-1056)
    • Appending properties with a body that contains only properties regresses and throws "The label of vertex can't be null" (HugeGraph-1057)
    • HugeGraph-Client adaptation: replace Timestamp with Date in the DataType of PropertyKey (HugeGraph-1059)
    • Creating an IndexLabel with an empty baseValue returns a 500 error (HugeGraph-1061)

    Core

    Feature updates

    • Implement independent transaction management in the upper layer, compatible with the tinkerpop transaction spec (HugeGraph-918, HugeGraph-941)
    • Improve the memory backend so that it can be accessed correctly through the API and adapts to tinkerpop transactions (HugeGraph-41)
    • Add the RocksDB backend storage driver framework (HugeGraph-929)
    • Implement range-query for RocksDB numeric indexes (HugeGraph-963)
    • Add an id to every schema element and replace the columns that depended on name with id in each table (HugeGraph-589)
    • When filling query key-value conditions, convert the value to the type defined by the key if the types do not match (HugeGraph-964)
    • Unify the offset and limit implementation across backends (HugeGraph-995)
    • Core supports returning query results for vertices and edges iteratively instead of loading them into memory all at once (HugeGraph-203)
    • The memory backend supports range query (HugeGraph-967)
    • Change the secondary index support of the memory backend from traversal to IdQuery (HugeGraph-996)
    • Composite indexes support complex combined index queries (any combination that is logically queryable is supported) (HugeGraph-903)
    • Add a field (map) to Schema for storing user data (HugeGraph-902)
    • Unify the parsing and serialization of IDs (both API and Backend) (HugeGraph-965)
    • RocksDB has no keyspace concept, so support for multiple graph instances needs to be improved (HugeGraph-973)
    • Support setting the username and password for Cassandra connections (HugeGraph-999)
    • The schema cache supports caching all metadata (get-all-schema) (HugeGraph-1037)
    • Keep exposing schema by name for now instead of using schema id directly (HugeGraph-1032)
    • Change the user-provided ID strategy to support String and Number (HugeGraph-956)

    Bug fixes

    • The schemaLabel object remains in the database after deleting the old prefix indexLabel (HugeGraph-969)
    • HugeConfig shares common Options during parsing, causing configuration items of different graphs to overwrite each other (HugeGraph-984)
    • Provide a friendlier exception message when the database data is incompatible (HugeGraph-998)
    • Support setting the username and password for Cassandra connections (HugeGraph-999)
    • RocksDB deleteRange triggers a RocksDB assert error after the end key overflows (HugeGraph-971)
    • Allow querying vertices/edges by a null id and return an empty collection (HugeGraph-1045)
    • Search results are incorrect when partially updated data in memory has not been committed (HugeGraph-1046)
    • g.V().hasLabel(XX) with a non-existent label reports: Internal Server Error and Undefined property key: '~label' (HugeGraph-1048)
    • The schema obtained via gremlin only contains the name string (HugeGraph-1049)
    • The count operation cannot be performed on large amounts of data (HugeGraph-1051)
    • RocksDB gets stuck when continuously inserting 60 to 80 million edges (HugeGraph-1053)
    • Sort out the supported property types and serialize property values in binary format in BinarySerializer (HugeGraph-1062)

    Tests

    • Add tinkerpop performance tests (HugeGraph-987)

    Internal changes

    • When HugeFactory opens the same graph (with the same name), the HugeGraph object is shared (HugeGraph-983)
    • Standardize the index type names secondary, range and search (HugeGraph-991)
    • Provide a friendlier exception message when the database data is incompatible (HugeGraph-998)
    • Separate the gryo and graphson modules in the IO part (HugeGraph-1041)
    • Add query performance tests to PerfExample (HugeGraph-1044)
    • Disable the metric log of gremlin-server (HugeGraph-1050)

    10 - HugeGraph 0.3.3 Release Notes

    API & Java Client

    Feature updates

    • Add nullable property sets for vertex-label and edge-label, which can be specified on create and append (HugeGraph-245)
    • Provide the tinkerpop variables RESTful API for users, together with the corresponding core functionality (HugeGraph-396)
    • Support updating and deleting properties of vertices/edges (HugeGraph-894)
    • Support conditional queries of vertices/edges (HugeGraph-919)

    Bug fixes

    • HugeGraph-API throws a NullPointerException when the received RequestBody is null or "" (HugeGraph-795)
    • Add input parameter checks to HugeGraph-API to avoid NullPointerExceptions (HugeGraph-796 ~ HugeGraph-798, HugeGraph-802, HugeGraph-808 ~ HugeGraph-814, HugeGraph-817, HugeGraph-823, HugeGraph-860)
    • Edges with a missing outV-label or inV-label can still be created successfully, which does not meet the requirement (HugeGraph-835)
    • Arbitrary index-names can be passed in when creating vertex-label and edge-label (HugeGraph-837)
    • Creating an index with base-type="VERTEX" or similar (VL or EL expected) returns 500 (HugeGraph-846)
    • The message is unfriendly when base-type and base-value do not match while creating an index (HugeGraph-848)
    • Deleting a relation between two entities that no longer exists returns 204 for schema but 404 for vertex and edge types (404 expected for both) (HugeGraph-853, HugeGraph-854)
    • The returned message is wrong when appending properties to a vertex-label with id-strategy missing (HugeGraph-861)
    • The message is wrong when appending properties to an edge-label with name missing (HugeGraph-862)
    • The message is wrong when appending properties to an edge-label with source-label set to "null" (HugeGraph-863)
    • An empty StringId used in a query should throw an exception (HugeGraph-868)
    • After creating an edge between two vertices via the REST API, the newly created edge is not shown by g.V() in studio, while g.E() can show it (HugeGraph-869)
    • Internal errors (500) of HugeGraph-Server should not return the stack trace to the client (HugeGraph-879)
    • An IllegalArgumentException is thrown when an empty id string is passed to addEdge (HugeGraph-885)
    • HugeGraph-Client fails to deserialize a Path in Gremlin query results when it does not contain a Vertex/Edge (HugeGraph-891)
    • HugeKeys enum strings are converted to lowercase with underscores, so field names are inconsistent with the variable names in classes during API serialization and serialization fails (HugeGraph-896)
    • Adding an edge to a non-existent vertex returns 404 (400 expected) (HugeGraph-922)

    Core

    Feature updates

    • Support updating properties (including index columns) of vertices/edges (HugeGraph-369)
    • Support index fields that are null or empty strings (hugegraph-553 and hugegraph-288)
    • Defer the property consistency guarantee of vertices/edges until the property is actually accessed (hugegraph-763)
    • Add the ScyllaDB backend driver (HugeGraph-772)
    • Support tinkerpop hasKey and hasValue queries (HugeGraph-826)
    • Support the tinkerpop variables feature (HugeGraph-396)
    • Properties starting with "~" are hidden system properties that users cannot create (HugeGraph-842)
    • Add Backend Features to be compatible with the characteristics of different backends (HugeGraph-844)
    • Refine the handling of operations that may occur in a mutation update instead of throwing an error directly (HugeGraph-887)
    • Check that properties appended to vertex-label/edge-label must be nullable (HugeGraph-890)
    • For queries by id, return the objects that do exist instead of throwing an exception when some ids do not exist (HugeGraph-900)

    Bug fixes

    • Vertex.edges(Direction.BOTH,…) assert error (HugeGraph-661)
    • Assigning the same (single) property multiple times in addVertex is not supported (HugeGraph-662)
    • Index columns that are not involved in a property update are lost (HugeGraph-801)
    • When a ConditionQuery in GraphTransaction requires an index query, commit is not triggered and the query fails (HugeGraph-805)
    • Cassandra does not support query offset; the query fetches limit=offset+limit records and then filters them (HugeGraph-851)
    • With multiple insert operations followed by a delete operation, the insert operations overwrite the delete operation (HugeGraph-857)
    • An empty StringId used in a query should throw an exception (HugeGraph-868)
    • The metadata schema method only returns hidden information (HugeGraph-912)

    Tests

    • Use different keyspaces for the tinkerpop structure and process tests (HugeGraph-763)
    • Add the tinkerpop tests and unit tests to the release-after-merge pipeline (HugeGraph-763)
    • Split the jenkins script into per-stage sub-scripts so that modifying the sub-scripts in the project takes effect for builds (HugeGraph-800)
    • Add a clear backends feature to clean up the backend after the tinkerpop suite finishes (HugeGraph-852)
    • Add tests for BackendMutation (HugeGraph-801)
    • A NoHostAvailableException may be thrown when operating the graph with multiple threads (HugeGraph-883)

    Internal changes

    • Adjust the JVM heap of HugeGraphServer and HugeGremlinServer at startup to an initial 256M and a maximum of 2048M (HugeGraph-218)
    • Use schemaBuilder instead of string concatenation when creating Cassandra tables (hugegraph-773)
    • clear() reports an error when graph initialization fails (e.g. the database cannot be connected) while running test cases (HugeGraph-910)
    • Example throws the exception "Need to specify a readable config file rather than…" (HugeGraph-921)
    • Keep the caches of HugeGraphServer and HugeGremlinServer in sync (HugeGraph-569)

    11 - HugeGraph 0.2 Release Notes

    API & Java Client

    Feature updates

    Version 0.2 implements the basic functions of a graph database and provides the following features:

    Metadata (Schema)

    Vertex Label

    • Create vertex labels
    • Delete vertex labels
    • Query vertex labels
    • Add properties to a vertex label

    Edge Label

    • Create edge labels
    • Delete edge labels
    • Query edge labels
    • Add properties to an edge label

    Property Key

    • Create property keys
    • Delete property keys
    • Query property keys

    Index Label

    • Create indexes
    • Delete indexes
    • Query indexes

    Metadata checks

    • Check the other metadata that a piece of metadata depends on (e.g. a Vertex Label depends on Property Keys)
    • Check the metadata that data depends on (e.g. a Vertex depends on a Vertex Label)

    Graph Data

    Vertex

    • Add vertices

    • Delete vertices

    • Add vertex properties

    • Delete vertex properties (must be non-index columns)

    • Insert vertices in batch

    • Query

    • Batch query

    • Vertex ID strategies (see the sketch after this list)

      • User-specified ID (string)
      • User-specified combination of certain properties as the ID (concatenated into a visible string)
      • Automatically generated ID
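
    To make the three ID strategies above concrete, here is a minimal hugegraph-client sketch. The label and property names are made up for illustration, and the package name may be com.baidu.hugegraph.driver in older client releases; treat it as an assumption-laden example rather than the canonical usage.

    import org.apache.hugegraph.driver.HugeClient;
    import org.apache.hugegraph.driver.SchemaManager;

    public class IdStrategyExample {
        public static void main(String[] args) {
            // Assumed server address and graph name; adjust to your deployment
            HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph").build();
            SchemaManager schema = client.schema();

            schema.propertyKey("name").asText().ifNotExist().create();

            // 1. Automatically generated ID
            schema.vertexLabel("book").useAutomaticId().properties("name").ifNotExist().create();

            // 2. Primary-key ID: the primary-key property values are concatenated into the ID
            schema.vertexLabel("person").usePrimaryKeyId().properties("name").primaryKeys("name").ifNotExist().create();

            // 3. User-specified (customized) string ID
            schema.vertexLabel("city").useCustomizeStringId().properties("name").ifNotExist().create();

            client.close();
        }
    }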

    Edge

    • Add edges
    • Add multiple edges of the same type between two given vertices (SortKey)
    • Delete edges
    • Add edge properties
    • Delete edge properties (must be non-index columns)
    • Insert edges in batch
    • Query
    • Batch query

    Vertex/Edge Properties

    • Supported property types

      • text
      • boolean
      • byte, blob
      • int, long
      • float, double
      • timestamp
      • uuid
    • Support single-value properties

    • Support multi-value properties: List and Set (note: properties cannot be nested)

    Transactions

    • Atomicity-level guarantee (depends on the backend)
    • Auto-commit transactions
    • Manual-commit transactions (see the sketch after this list)
    • Parallel transactions
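
    A minimal sketch of manual commit on a server-side embedded graph, assuming hugegraph-core is on the classpath, a valid config file exists, and the "person" schema from the examples above has already been created (package names differ across versions: org.apache.hugegraph vs com.baidu.hugegraph). The tx() calls are the standard TinkerPop transaction API the release notes say HugeGraph is compatible with.

    import org.apache.tinkerpop.gremlin.structure.T;

    public class TxExample {
        public static void main(String[] args) throws Exception {
            org.apache.hugegraph.HugeGraph graph = org.apache.hugegraph.HugeFactory.open("conf/hugegraph.properties");
            try {
                // Manual transaction: open, write, then commit (or roll back on failure)
                graph.tx().open();
                graph.addVertex(T.label, "person", "name", "marko");
                graph.tx().commit();
            } catch (Exception e) {
                graph.tx().rollback();
                throw e;
            } finally {
                graph.close();
            }
        }
    }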

    Indexes

    Index types

    • Secondary index
    • Range index (numeric types)

    Index operations (see the sketch after this list)

    • Create a single-column index for vertices/edges of a given type (List or Set columns cannot be indexed)
    • Create a composite index for vertices/edges of a given type (List or Set columns cannot be indexed; composite indexes are prefix indexes)
    • Delete the indexes of vertices/edges of a given type (some or all of them)
    • Rebuild the indexes of vertices/edges of a given type (some or all of them)
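
    A minimal hugegraph-client sketch of creating the two index types; the index, label and property names ("personByCity", "personByAge", "person", "city", "age") are illustrative assumptions, and the client package may be com.baidu.hugegraph.driver in older releases.

    import org.apache.hugegraph.driver.HugeClient;
    import org.apache.hugegraph.driver.SchemaManager;

    public class IndexExample {
        public static void main(String[] args) {
            HugeClient client = HugeClient.builder("http://localhost:8080", "hugegraph").build();
            SchemaManager schema = client.schema();

            // Secondary (exact-match) index on a single column of "person" vertices
            schema.indexLabel("personByCity").onV("person").by("city").secondary().ifNotExist().create();

            // Range index on a numeric column, usable for value-range queries on "age"
            schema.indexLabel("personByAge").onV("person").by("age").range().ifNotExist().create();

            client.close();
        }
    }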

    Query/Traversal

    • List all metadata and graph data (Limit is supported, paging is not)

    • Query metadata and graph data by ID

    • Query graph data by the value of a given property

    • Query graph data by the value range of a given property (the property must be numeric)

    • Query vertices/edges by vertex/edge type and the value of a given property

    • Query vertices by vertex/edge type and the value range of a given property (the property must be numeric)

    • Query vertices by vertex type (Vertex Label)

    • Query edges by edge type (Edge Label)

    • Query edges by vertex

      • Query all edges of a vertex
      • Query the edges of a vertex in a given direction (out-edges, in-edges)
      • Query the edges of a vertex with a given direction and type
      • Query one of the edges of the same type between two vertices (SortKey)
    • Standard Gremlin traversal (see the sketch after this list)
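
    A small standard-Gremlin sketch in Java combining several of the query forms above (label filter, numeric property range, edge walk). It reuses the embedded-graph assumptions of the transaction sketch and assumes a "person" label with "age"/"name" properties and "knows" edges; these names are illustrative.

    import org.apache.tinkerpop.gremlin.process.traversal.P;
    import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
    import java.util.List;

    public class TraversalExample {
        public static void main(String[] args) throws Exception {
            org.apache.hugegraph.HugeGraph graph = org.apache.hugegraph.HugeFactory.open("conf/hugegraph.properties");
            GraphTraversalSource g = graph.traversal();

            // Vertices by label and property value range, then walk out-edges and read a property
            List<Object> names = g.V().hasLabel("person")
                                  .has("age", P.gt(20))
                                  .out("knows")
                                  .values("name")
                                  .toList();
            System.out.println(names);
            graph.close();
        }
    }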

    Cache

    Cacheable content

    • Metadata cache
    • Vertex cache

    Cache features

    • LRU policy
    • High-performance concurrent access
    • Support for timeout-based expiration

    Interfaces (RESTful API)

    • Version API
    • Graph instance API
    • Metadata API
    • Graph data API
    • Gremlin API

    See the API documentation for more details

    Backend support

    Cassandra backend

    • Persistent storage
    • CQL3
    • Cluster support

    Memory backend (for testing only)

    • Non-persistent
    • Some features are not supported (e.g. updating edge properties, querying edges by edge label)

    Other

    Supported configuration options

    • Backend storage type
    • Serialization method
    • Cache parameters

    Multiple graph instances

    • Static configuration (add multiple graph configuration files)

    Version checks

    • Version check of matching internal dependency packages
    • API version check

    12 - HugeGraph 0.2.4 Release Notes

    API & Java Client

    Feature updates

    Metadata (Schema) related

    Bug fixes

    • Vertex Labels with a non-primary-key id strategy should allow properties to be empty (HugeGraph-651)
    • The EdgeLabel serialized by Gremlin-Server only has a directed property; the full schema description should be printed (HugeGraph-680)
    • Creating an IndexLabel with a non-existent property throws a NullPointerException; an IllegalArgumentException should be thrown instead (HugeGraph-682)
    • When creating a schema element that already exists with ifNotExist specified, the original object should be returned (HugeGraph-694)
    • Because the Frequency of EdgeLabel defaults to null and cannot be modified, passing null in an Append operation fails deserialization in the API layer (HugeGraph-729)
    • Add a configurable regex check for schema names; all-whitespace names are not allowed by default (HugeGraph-727)
    • Schemas with Chinese names are displayed as garbled text in the front end (HugeGraph-711)

    Graph data (Vertex, Edge) related

    Feature updates

    • DataType supports Array, and List-typed values can be assigned with a List object directly in addition to adding objects one by one (HugeGraph-719)
    • Change automatically generated vertex ids from decimal to hexadecimal (when stored as strings) (HugeGraph-785)

    Bug fixes

    • The VertexLabel/EdgeLabel API of HugeGraph-API does not provide an eliminate interface (HugeGraph-614)
    • When adding vertices with a non-primary-key id strategy, they cannot be inserted into the database if the properties are empty (HugeGraph-652)
    • When sending a groovy request with no return value via the HugeGraph-Client gremlin interface, gremlin-server serializes the missing value as null, causing a NullPointerException when the client iterates the result set (HugeGraph-664)
    • The RESTful API returns 500 when no vertex/edge with the given id is found (HugeGraph-734)
    • The equals() of HugeElement/HugeProperty is not compatible with tinkerpop (HugeGraph-653)
    • Make the equals function of HugeEdgeProperty's property compatible with tinkerpop (HugeGraph-740)
    • The hashcode function of HugeElement/HugeVertexProperty is not compatible with tinkerpop (HugeGraph-728)
    • The toString function of HugeVertex/HugeEdge is not compatible with tinkerpop (HugeGraph-665)
    • Exceptions are not compatible with tinkerpop, including IllegalArgumentsException and UnsupportedOperationException (HugeGraph-667)
    • The exception type thrown when an element cannot be found by id is not compatible with tinkerpop (HugeGraph-689)
    • vertex.addEdge does not check whether the number of properties is a multiple of 2 (HugeGraph-716)
    • In vertex.addEdge(), assignId is called too late, causing duplicate edges in the vertex's Set (HugeGraph-666)
    • Queries with three or more levels of logical nesting throw a ClassCastException; an IllegalArgumentException is now thrown instead (HugeGraph-481)
    • Edge queries that contain both source-vertex/direction and a property as conditions return wrong results (HugeGraph-749)
    • If cassandra goes down while HugeGraph-Server is running, insert or query operations throw DataStax exceptions with detailed call stacks (HugeGraph-771)
    • Deleting a non-existent indexLabel throws an exception, while deleting the other three kinds of (non-existent) metadata does not (HugeGraph-782)
    • When the id of the source or target vertex passed to EdgeApi is illegal, a 404 status code is returned to the client because the vertex cannot be found (HugeGraph-784)
    • Provide an internal interface for fetching metadata so that SchemaManager is only for external use and a NotFoundException is thrown when a non-existent schema is fetched (HugeGraph-743)
    • HugeGraph-Client should return the results from the server when creating/appending/eliminating metadata (HugeGraph-760)
    • Creating a HugeGraph-Client with a wrong host blocks the process and makes it unresponsive (HugeGraph-718)

    Query, index and cache related

    Feature updates

    • A more efficient locking scheme for cache updates (HugeGraph-555)
    • Index queries support IN statements with a single element (previously only EQ was supported) (HugeGraph-739)

    Bug fixes

    • Prevent the service itself from hanging when the requested data volume is too large (HugeGraph-777)

    Other

    Feature updates

    • Make Init-Store only initialize the database; clearing the backend is implemented by a separate script (HugeGraph-650)

    Bug fixes

    • Temporary keyspaces are left on the test machine after unit tests finish (HugeGraph-611)
    • Cassandra produces too many info logs; change most of them to debug level (HugeGraph-722)
    • The judgement logic of EventHub.containsListener(String event) has omissions (HugeGraph-732)
    • EventHub.listeners/unlisten(String event) throws a NullPointerException when there is no listener for the event (HugeGraph-733)

    Tests

    Tinkerpop compliance tests

    • Add a custom ignore mechanism to skip test cases that do not need to be in continuous integration for now (HugeGraph-647)
    • Register GraphSon and Kryo serializers for TestGraph to implement graphson-v1, graphson-v2 and Kryo serialization/deserialization of IdGenerator$StringId (HugeGraph-660)
    • Add a configurable test case filter so that the tinkerpop tests can be used in regression tests of both development and release branches
    • Add the tinkerpop tests to the regression tests via configuration files

    Unit tests

    • Add unit tests for Cache and Event (HugeGraph-659)
    • Add API tests for HugeGraph-Client (99 tests)
    • Add unit tests for HugeGraph-Client, including deserialization tests for RestResult (12 tests)

    Internal changes

    • Improve the code around LOG variables (HugeGraph-623/HugeGraph-631)
    • Adjust the License format (HugeGraph-625)
    • Detach the graph held by serializers; functions that need a graph now receive it as a parameter (HugeGraph-750)

    diff --git a/cn/docs/changelog/hugegraph-0.10.4-release-notes/index.html b/cn/docs/changelog/hugegraph-0.10.4-release-notes/index.html index 68175b06e..0cdb31f3d 100644 --- a/cn/docs/changelog/hugegraph-0.10.4-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.10.4-release-notes/index.html @@ -9,7 +9,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.10 Release Notes

    API & Client

    功能更新

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.10 Release Notes

    API & Client

    功能更新

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.11.2-release-notes/index.html b/cn/docs/changelog/hugegraph-0.11.2-release-notes/index.html index 8f4cae98d..2fb1e9336 100644 --- a/cn/docs/changelog/hugegraph-0.11.2-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.11.2-release-notes/index.html @@ -9,7 +9,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.11 Release Notes

    API & Client

    功能更新

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.11 Release Notes

    API & Client

    功能更新

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.12.0-release-notes/index.html b/cn/docs/changelog/hugegraph-0.12.0-release-notes/index.html index 64ad487d4..099c0bb17 100644 --- a/cn/docs/changelog/hugegraph-0.12.0-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.12.0-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.12 Release Notes

    API & Client

    接口更新

    其它修改

    Core & Server

    功能更新

    BUG修复

    配置项修改:

    其它修改

    Loader

    Tools


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.12 Release Notes

    API & Client

    接口更新

    其它修改

    Core & Server

    功能更新

    BUG修复

    配置项修改:

    其它修改

    Loader

    Tools


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.2-release-notes/index.html b/cn/docs/changelog/hugegraph-0.2-release-notes/index.html index 1eadf625f..fe3f687da 100644 --- a/cn/docs/changelog/hugegraph-0.2-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.2-release-notes/index.html @@ -56,7 +56,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.2 Release Notes

    API & Java Client

    功能更新

    0.2版实现了图数据库基本功能,提供如下功能:

    元数据(Schema)

    顶点类型(Vertex Label)

    边类型(Edge Label)

    属性(Property Key)

    索引(Index Label)

    元数据检查

    图数据

    顶点(Vertex)

    边(Edge)

    顶点/边属性

    事务

    索引

    索引类型

    索引操作

    查询/遍历

    缓存

    可缓存内容

    缓存特性

    接口(RESTful API)

    更多细节详见API文档

    后端支持

    支持Cassandra后端

    支持Memory后端(仅用于测试)

    其它

    支持配置项

    支持多图实例

    版本检查


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.2 Release Notes

    API & Java Client

    功能更新

    0.2版实现了图数据库基本功能,提供如下功能:

    元数据(Schema)

    顶点类型(Vertex Label)

    边类型(Edge Label)

    属性(Property Key)

    索引(Index Label)

    元数据检查

    图数据

    顶点(Vertex)

    边(Edge)

    顶点/边属性

    事务

    索引

    索引类型

    索引操作

    查询/遍历

    缓存

    可缓存内容

    缓存特性

    接口(RESTful API)

    更多细节详见API文档

    后端支持

    支持Cassandra后端

    支持Memory后端(仅用于测试)

    其它

    支持配置项

    支持多图实例

    版本检查


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.2.4-release-notes/index.html b/cn/docs/changelog/hugegraph-0.2.4-release-notes/index.html index 615954037..4f5511f94 100644 --- a/cn/docs/changelog/hugegraph-0.2.4-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.2.4-release-notes/index.html @@ -10,7 +10,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.2.4 Release Notes

    API & Java Client

    功能更新

    元数据(Schema)相关

    BUG修复

    图数据(Vertex、Edge)相关

    功能更新

    BUG修复

    查询、索引、缓存相关

    功能更新

    BUG修复

    其它

    功能更新

    BUG修复

    测试

    Tinkerpop合规测试

    单元测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.2.4 Release Notes

    API & Java Client

    功能更新

    元数据(Schema)相关

    BUG修复

    图数据(Vertex、Edge)相关

    功能更新

    BUG修复

    查询、索引、缓存相关

    功能更新

    BUG修复

    其它

    功能更新

    BUG修复

    测试

    Tinkerpop合规测试

    单元测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.3.3-release-notes/index.html b/cn/docs/changelog/hugegraph-0.3.3-release-notes/index.html index 72440675f..e881649dc 100644 --- a/cn/docs/changelog/hugegraph-0.3.3-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.3.3-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.3.3 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.3.3 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.4.4-release-notes/index.html b/cn/docs/changelog/hugegraph-0.4.4-release-notes/index.html index 4e579600d..883e63190 100644 --- a/cn/docs/changelog/hugegraph-0.4.4-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.4.4-release-notes/index.html @@ -10,7 +10,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.4.4 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.4.4 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.5.6-release-notes/index.html b/cn/docs/changelog/hugegraph-0.5.6-release-notes/index.html index a6e025f91..5b8cc255a 100644 --- a/cn/docs/changelog/hugegraph-0.5.6-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.5.6-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.5 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.5 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.6.1-release-notes/index.html b/cn/docs/changelog/hugegraph-0.6.1-release-notes/index.html index 7828b94b4..134fe031c 100644 --- a/cn/docs/changelog/hugegraph-0.6.1-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.6.1-release-notes/index.html @@ -11,7 +11,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.6 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改

    Tools

    功能更新

    BUG修复

    Loader

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.6 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    测试

    内部修改

    Tools

    功能更新

    BUG修复

    Loader

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.7.4-release-notes/index.html b/cn/docs/changelog/hugegraph-0.7.4-release-notes/index.html index 813df5234..d4830fff9 100644 --- a/cn/docs/changelog/hugegraph-0.7.4-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.7.4-release-notes/index.html @@ -13,7 +13,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.7 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    内部修改

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    Studio

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.7 Release Notes

    API & Java Client

    功能更新

    BUG修复

    Core

    功能更新

    BUG修复

    内部修改

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复

    Studio

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.8.0-release-notes/index.html b/cn/docs/changelog/hugegraph-0.8.0-release-notes/index.html index 9a350f99f..cbc885f8c 100644 --- a/cn/docs/changelog/hugegraph-0.8.0-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.8.0-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.8 Release Notes

    API & Client

    功能更新

    BUG修复

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.8 Release Notes

    API & Client

    功能更新

    BUG修复

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/hugegraph-0.9.2-release-notes/index.html b/cn/docs/changelog/hugegraph-0.9.2-release-notes/index.html index f8e022ce0..a7a1dd2c0 100644 --- a/cn/docs/changelog/hugegraph-0.9.2-release-notes/index.html +++ b/cn/docs/changelog/hugegraph-0.9.2-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.9 Release Notes

    API & Client

    功能更新

    BUG修复

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.9 Release Notes

    API & Client

    功能更新

    BUG修复

    内部修改

    Core

    功能更新

    BUG修复

    内部修改

    其它

    Loader

    功能更新

    BUG修复

    内部修改

    Tools

    功能更新

    BUG修复


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/changelog/index.html b/cn/docs/changelog/index.html index 89683cb16..2a612f35e 100644 --- a/cn/docs/changelog/index.html +++ b/cn/docs/changelog/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    CHANGELOGS


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    CHANGELOGS


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/cla/index.html b/cn/docs/cla/index.html index 1326cdb8f..3eb67f1a2 100644 --- a/cn/docs/cla/index.html +++ b/cn/docs/cla/index.html @@ -13,7 +13,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

    Thank you for your interest in contributing to HugeGraph’s all projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

    Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customary used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

    We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

    We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses..

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

    9 Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/_print/index.html b/cn/docs/clients/_print/index.html index 447afeee1..d72aa36f2 100644 --- a/cn/docs/clients/_print/index.html +++ b/cn/docs/clients/_print/index.html @@ -4557,7 +4557,7 @@ gremlin> :> @script ==>6 -

    For more information about how to use gremlin-console, please refer to the Tinkerpop official website

    +

    For more information about how to use gremlin-console, please refer to the Tinkerpop official website

    diff --git a/cn/docs/clients/gremlin-console/index.html b/cn/docs/clients/gremlin-console/index.html index 993a7a892..d47f6132c 100644 --- a/cn/docs/clients/gremlin-console/index.html +++ b/cn/docs/clients/gremlin-console/index.html @@ -248,7 +248,7 @@ gremlin> :> @script ==>6 -

    For more information about how to use gremlin-console, please refer to the Tinkerpop official website


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    For more information about how to use gremlin-console, please refer to the Tinkerpop official website


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/hugegraph-client/index.html b/cn/docs/clients/hugegraph-client/index.html index 860232cf6..d2bec6eb5 100644 --- a/cn/docs/clients/hugegraph-client/index.html +++ b/cn/docs/clients/hugegraph-client/index.html @@ -112,7 +112,7 @@

    3 Graph Data

    3.1 Vertex

    Vertices are the most basic elements of a graph, and a graph can contain a very large number of them. Below is an example of adding vertices:

    Vertex marko = graph.addVertex(T.label, "person", "name", "marko", "age", 29);
     Vertex lop = graph.addVertex(T.label, "software", "name", "lop", "lang", "java", "price", 328);
     

    3.2 Edge

    With vertices alone the graph is not complete; edges are needed as well. Below is an example of adding an edge (vadas is assumed to be another "person" vertex added in the same way as marko):

    Edge knows1 = marko.addEdge("knows", vadas, "city", "Beijing");
    -

    Note: when frequency is multiple, a value must be provided for each property listed in sortKeys (see the sketch below).
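
    A minimal sketch of what the note above means, continuing the marko/vadas example and reusing the schema object from the schema sections; the "call" edge label and the "time" property are made up for illustration and are not part of the original page.

    schema.propertyKey("time").asText().ifNotExist().create();
    schema.edgeLabel("call")
          .sourceLabel("person").targetLabel("person")
          .properties("time")
          .sortKeys("time")          // the sortKeys property distinguishes parallel edges
          .multiTimes()              // frequency = multiple
          .ifNotExist().create();

    // Every addEdge between the same pair of vertices must carry a "time" value
    Edge call1 = marko.addEdge("call", vadas, "time", "2022-01-01 10:00:00");
    Edge call2 = marko.addEdge("call", vadas, "time", "2022-01-02 09:30:00");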

    4 Simple Example

    For a simple example, see HugeGraph-Client


    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +

    Note: when frequency is multiple, a value must be provided for each property listed in sortKeys.

    4 Simple Example

    For a simple example, see HugeGraph-Client


    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/clients/index.html b/cn/docs/clients/index.html index f6af6c753..920053ee4 100644 --- a/cn/docs/clients/index.html +++ b/cn/docs/clients/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    API


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    API


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/_print/index.html b/cn/docs/clients/restful-api/_print/index.html index 33171b4c8..34802eb50 100644 --- a/cn/docs/clients/restful-api/_print/index.html +++ b/cn/docs/clients/restful-api/_print/index.html @@ -4251,7 +4251,7 @@ "api": "0.13.2.0" } } - + diff --git a/cn/docs/clients/restful-api/auth/index.html b/cn/docs/clients/restful-api/auth/index.html index 3ddd706a4..5e8b4e37f 100644 --- a/cn/docs/clients/restful-api/auth/index.html +++ b/cn/docs/clients/restful-api/auth/index.html @@ -406,7 +406,7 @@ "group": "-69:all", "target": "-77:all" } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/edge/index.html b/cn/docs/clients/restful-api/edge/index.html index 945252a3d..0337a4f11 100644 --- a/cn/docs/clients/restful-api/edge/index.html +++ b/cn/docs/clients/restful-api/edge/index.html @@ -354,7 +354,7 @@
    Response Status
    204
     

    Delete an edge by Label+Id

    When deleting an edge by specifying both the Label parameter and the Id, the performance is generally better than deleting by Id alone.

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person
     
    Response Status
    204
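
    A minimal Java sketch of sending the DELETE request shown above with the JDK HttpClient; the edge id and the ?label=person parameter come from the example, while URL-encoding the id is an extra step needed because the id contains reserved characters such as ">".

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class DeleteEdgeExample {
        public static void main(String[] args) throws Exception {
            // The edge id from the example above; '>' and ':' must be URL-encoded
            String edgeId = URLEncoder.encode("S1:peter>1>>S2:lop", StandardCharsets.UTF_8);
            URI uri = URI.create("http://localhost:8080/graphs/hugegraph/graph/edges/" + edgeId + "?label=person");

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(uri).DELETE().build();
            HttpResponse<Void> response = client.send(request, HttpResponse.BodyHandlers.discarding());
            System.out.println(response.statusCode()); // expect 204 on success
        }
    }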
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/clients/restful-api/edgelabel/index.html b/cn/docs/clients/restful-api/edgelabel/index.html index c59d52ddc..c206d5897 100644 --- a/cn/docs/clients/restful-api/edgelabel/index.html +++ b/cn/docs/clients/restful-api/edgelabel/index.html @@ -194,7 +194,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.
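
    A minimal Java sketch of polling that task URL with the JDK HttpClient; the URL is the one given in the note above, and the printed body is the task description returned by the server.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TaskStatusExample {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/graphs/hugegraph/tasks/1"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // contains the task status
        }
    }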


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/graphs/index.html b/cn/docs/clients/restful-api/graphs/index.html index 23455a03e..fad65596b 100644 --- a/cn/docs/clients/restful-api/graphs/index.html +++ b/cn/docs/clients/restful-api/graphs/index.html @@ -118,7 +118,7 @@ "local": "OK" } } -
    Last modified May 27, 2022: divide create graph into clone and create (665739b)
    +
    Last modified May 27, 2022: divide create graph into clone and create (665739b)
    diff --git a/cn/docs/clients/restful-api/gremlin/index.html b/cn/docs/clients/restful-api/gremlin/index.html index 201de7658..cefcc4d76 100644 --- a/cn/docs/clients/restful-api/gremlin/index.html +++ b/cn/docs/clients/restful-api/gremlin/index.html @@ -141,7 +141,7 @@
    Response Body
    {
     	"task_id": 2
     }
    -

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/index.html b/cn/docs/clients/restful-api/index.html index dbe9b05f7..a0fbe38fc 100644 --- a/cn/docs/clients/restful-api/index.html +++ b/cn/docs/clients/restful-api/index.html @@ -7,7 +7,7 @@ Create documentation issue Create project issue Print entire section

    HugeGraph RESTful API

    HugeGraph-Server provides interfaces for clients to operate on graphs over the HTTP protocol through HugeGraph-API, mainly including the CRUD of metadata and graph data, traversal algorithms, variables, graph operations and other operations.


    Schema API

    PropertyKey API

    VertexLabel API

    EdgeLabel API

    IndexLabel API

    Rebuild API

    Vertex API

    Edge API

    Traverser API

    Rank API

    Variable API

    Graphs API

    Task API

    Gremlin API

    Authentication API

    Other API


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/indexlabel/index.html b/cn/docs/clients/restful-api/indexlabel/index.html index 74b60fefc..b32bd6be8 100644 --- a/cn/docs/clients/restful-api/indexlabel/index.html +++ b/cn/docs/clients/restful-api/indexlabel/index.html @@ -99,7 +99,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/other/index.html b/cn/docs/clients/restful-api/other/index.html index 3771f394d..12b16474b 100644 --- a/cn/docs/clients/restful-api/other/index.html +++ b/cn/docs/clients/restful-api/other/index.html @@ -22,7 +22,7 @@ "api": "0.13.2.0" } } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/propertykey/index.html b/cn/docs/clients/restful-api/propertykey/index.html index f77beff1b..dedbd6bc7 100644 --- a/cn/docs/clients/restful-api/propertykey/index.html +++ b/cn/docs/clients/restful-api/propertykey/index.html @@ -149,7 +149,7 @@
    Response Body
    {
         "task_id" : 0
     }
    -

    Last modified May 12, 2022: fix: bad request body simple in propertykey.md (1c933ca)
    +
    Last modified May 12, 2022: fix: bad request body simple in propertykey.md (1c933ca)
    diff --git a/cn/docs/clients/restful-api/rank/index.html b/cn/docs/clients/restful-api/rank/index.html index 896604b60..cb2b885a3 100644 --- a/cn/docs/clients/restful-api/rank/index.html +++ b/cn/docs/clients/restful-api/rank/index.html @@ -260,7 +260,7 @@ } ] } -
    4.2.2.3 Applicable Scenarios

    Find the vertices that should be recommended most for a given starting vertex in different layers.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    4.2.2.3 Applicable Scenarios

    Find the vertices that should be recommended most for a given starting vertex in different layers.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/rebuild/index.html b/cn/docs/clients/restful-api/rebuild/index.html index 3b7ebeb5f..121974a90 100644 --- a/cn/docs/clients/restful-api/rebuild/index.html +++ b/cn/docs/clients/restful-api/rebuild/index.html @@ -39,7 +39,7 @@
    Response Body
    {
         "task_id": 3
     }
    -

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/3 (where "3" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/3 (where "3" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/schema/index.html b/cn/docs/clients/restful-api/schema/index.html index 9cdc34c30..46b1021bc 100644 --- a/cn/docs/clients/restful-api/schema/index.html +++ b/cn/docs/clients/restful-api/schema/index.html @@ -308,7 +308,7 @@ } ] } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/task/index.html b/cn/docs/clients/restful-api/task/index.html index 93cb07b04..68e651533 100644 --- a/cn/docs/clients/restful-api/task/index.html +++ b/cn/docs/clients/restful-api/task/index.html @@ -60,7 +60,7 @@
    Response Body
    {
         "cancelled": true
     }
    -

    At this point, the number of vertices queried with label man must be less than 10.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    At this point, the number of vertices queried with label man must be less than 10.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/traverser/index.html b/cn/docs/clients/restful-api/traverser/index.html index 34515902b..6fe9b8cf9 100644 --- a/cn/docs/clients/restful-api/traverser/index.html +++ b/cn/docs/clients/restful-api/traverser/index.html @@ -1720,7 +1720,7 @@ } ] } -
    3.2.23.4 Applicable Scenarios

    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    3.2.23.4 Applicable Scenarios

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/variable/index.html b/cn/docs/clients/restful-api/variable/index.html index bd8d66225..819c11263 100644 --- a/cn/docs/clients/restful-api/variable/index.html +++ b/cn/docs/clients/restful-api/variable/index.html @@ -31,7 +31,7 @@ }

    5.1.4 Delete a key-value pair

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/variables/name
     
    Response Status
    204
    -

    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/clients/restful-api/vertex/index.html b/cn/docs/clients/restful-api/vertex/index.html index 004e975bc..e626ffb68 100644 --- a/cn/docs/clients/restful-api/vertex/index.html +++ b/cn/docs/clients/restful-api/vertex/index.html @@ -422,7 +422,7 @@
    Response Status
    204
     

    Delete a vertex by Label+Id

    When deleting a vertex by specifying both the Label parameter and the Id, the performance is generally better than deleting by Id alone.

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
     
    Response Status
    204
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/clients/restful-api/vertexlabel/index.html b/cn/docs/clients/restful-api/vertexlabel/index.html index 6cf6d5aa4..7d142d66f 100644 --- a/cn/docs/clients/restful-api/vertexlabel/index.html +++ b/cn/docs/clients/restful-api/vertexlabel/index.html @@ -190,7 +190,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    Note:

    You can query the execution status of the async task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the async task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/config/_print/index.html b/cn/docs/config/_print/index.html index 732b2252c..13c43c73f 100644 --- a/cn/docs/config/_print/index.html +++ b/cn/docs/config/_print/index.html @@ -265,7 +265,7 @@ 国家代码:CN
1. Export the server certificate from the server's private key (keystore)
    keytool -export -alias serverkey -keystore server.keystore -file server.crt
     

server.crt is the server's certificate.

Client side

    keytool -import -alias serverkey -file server.crt -keystore client.truststore

client.truststore is used by the client; it holds the trusted certificates.
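As an optional sanity check (not part of the original steps), the imported entry can be listed from the truststore; keytool will prompt for the truststore password:

    # list the trusted entries in the client truststore
    keytool -list -keystore client.truststore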

5 - HugeGraph-Computer Config

    Computer Config Options

config option | default value | description
algorithm.message_class | org.apache.hugegraph.computer.core.config.Null | The class of message passed when compute vertex.
algorithm.params_class | org.apache.hugegraph.computer.core.config.Null | The class used to transfer algorithms’ parameters before algorithm been run.
algorithm.result_class | org.apache.hugegraph.computer.core.config.Null | The class of vertex’s value, the instance is used to store computation result for the vertex.
allocator.max_vertices_per_thread | 10000 | Maximum number of vertices per thread processed in each memory allocator
bsp.etcd_endpoints | http://localhost:2379 | The end points to access etcd.
bsp.log_interval | 30000 | The log interval(in ms) to print the log while waiting bsp event.
bsp.max_super_step | 10 | The max super step of the algorithm.
bsp.register_timeout | 300000 | The max timeout to wait for master and workers to register.
bsp.wait_master_timeout | 86400000 | The max timeout(in ms) to wait for master bsp event.
bsp.wait_workers_timeout | 86400000 | The max timeout to wait for workers bsp event.
hgkv.max_data_block_size | 65536 | The max byte size of hgkv-file data block.
hgkv.max_file_size | 2147483648 | The max number of bytes in each hgkv-file.
hgkv.max_merge_files | 10 | The max number of files to merge at one time.
hgkv.temp_file_dir | /tmp/hgkv | This folder is used to store temporary files, temporary files will be generated during the file merging process.
hugegraph.name | hugegraph | The graph name to load data and write results back.
hugegraph.url | http://127.0.0.1:8080 | The hugegraph url to load data and write results back.
input.edge_direction | OUT | The data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
input.edge_freq | MULTIPLE | The frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
input.filter_class | org.apache.hugegraph.computer.core.input.filter.DefaultInputFilter | The class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
input.loader_schema_path | | The schema path of loader input, only takes effect when the input.source_type=loader is enabled
input.loader_struct_path | | The struct path of loader input, only takes effect when the input.source_type=loader is enabled
input.max_edges_in_one_vertex | 200 | The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
input.source_type | hugegraph-server | The source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
input.split_fetch_timeout | 300 | The timeout in seconds to fetch input splits
input.split_max_splits | 10000000 | The maximum number of input splits
input.split_page_size | 500 | The page size for streamed load input split data
input.split_size | 1048576 | The input split size in bytes
job.id | local_0001 | The job id on Yarn cluster or K8s cluster.
job.partitions_count | 1 | The partitions count for computing one graph algorithm job.
job.partitions_thread_nums | 4 | The number of threads for partition parallel compute.
job.workers_count | 1 | The workers count for computing one graph algorithm job.
master.computation_class | org.apache.hugegraph.computer.core.master.DefaultMasterComputation | Master-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
output.batch_size | 500 | The batch size of output
output.batch_threads | 1 | The threads number used to batch output
output.hdfs_core_site_path | | The hdfs core site path.
output.hdfs_delimiter | , | The delimiter of hdfs output.
output.hdfs_kerberos_enable | false | Is Kerberos authentication enabled for Hdfs.
output.hdfs_kerberos_keytab | | The Hdfs’s key tab file for kerberos authentication.
output.hdfs_kerberos_principal | | The Hdfs’s principal for kerberos authentication.
output.hdfs_krb5_conf | /etc/krb5.conf | Kerberos configuration file.
output.hdfs_merge_partitions | true | Whether merge output files of multiple partitions.
output.hdfs_path_prefix | /hugegraph-computer/results | The directory of hdfs output result.
output.hdfs_replication | 3 | The replication number of hdfs.
output.hdfs_site_path | | The hdfs site path.
output.hdfs_url | hdfs://127.0.0.1:9000 | The hdfs url of output.
output.hdfs_user | hadoop | The hdfs user of output.
output.output_class | org.apache.hugegraph.computer.core.output.LogOutput | The class to output the computation result of each vertex. Be called after iteration computation.
output.result_name | value | The value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
output.result_write_type | OLAP_COMMON | The result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
output.retry_interval | 10 | The retry interval when output failed
output.retry_times | 3 | The retry times when output failed
output.single_threads | 1 | The threads number used to single output
output.thread_pool_shutdown_timeout | 60 | The timeout seconds of output threads pool shutdown
output.with_adjacent_edges | false | Output the adjacent edges of the vertex or not
output.with_edge_properties | false | Output the properties of the edge or not
output.with_vertex_properties | false | Output the properties of the vertex or not
sort.thread_nums | 4 | The number of threads performing internal sorting.
transport.client_connect_timeout | 3000 | The timeout(in ms) of client connect to server.
transport.client_threads | 4 | The number of transport threads for client.
transport.close_timeout | 10000 | The timeout(in ms) of close server or close client.
transport.finish_session_timeout | 0 | The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
transport.heartbeat_interval | 20000 | The minimum interval(in ms) between heartbeats on client side.
transport.io_mode | AUTO | The network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
transport.max_pending_requests | 8 | The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
transport.max_syn_backlog | 511 | The capacity of SYN queue on server side, 0 means using system default value.
transport.max_timeout_heartbeat_count | 120 | The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
transport.min_ack_interval | 200 | The minimum interval(in ms) of server reply ack.
transport.min_pending_requests | 6 | The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
transport.network_retries | 3 | The number of retry attempts for network communication, if network is unstable.
transport.provider_class | org.apache.hugegraph.computer.core.network.netty.NettyTransportProvider | The transport provider, currently only supports Netty.
transport.receive_buffer_size | 0 | The size of socket receive-buffer in bytes, 0 means using system default value.
transport.recv_file_mode | true | Whether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
transport.send_buffer_size | 0 | The size of socket send-buffer in bytes, 0 means using system default value.
transport.server_host | 127.0.0.1 | The server hostname or ip to listen on to transfer data.
transport.server_idle_timeout | 360000 | The max timeout(in ms) of server idle.
transport.server_port | 0 | The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
transport.server_threads | 4 | The number of transport threads for server.
transport.sync_request_timeout | 10000 | The timeout(in ms) to wait response after sending sync-request.
transport.tcp_keep_alive | true | Whether enable TCP keep-alive.
transport.transport_epoll_lt | false | Whether enable EPOLL level-trigger.
transport.write_buffer_high_mark | 67108864 | The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
transport.write_buffer_low_mark | 33554432 | The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.
transport.write_socket_timeout | 3000 | The timeout(in ms) to write data to socket buffer.
valuefile.max_segment_size | 1073741824 | The max number of bytes in each segment of value-file.
worker.combiner_class | org.apache.hugegraph.computer.core.config.Null | Combiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
worker.computation_class | org.apache.hugegraph.computer.core.config.Null | The class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
worker.data_dirs | [jobs] | The directories separated by ‘,’ that received vertices and messages can persist into.
worker.edge_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same edge into one properties at inputstep.
worker.partitioner | org.apache.hugegraph.computer.core.graph.partition.HashPartitioner | The partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
worker.received_buffers_bytes_limit | 104857600 | The limit bytes of buffers of received data, the total size of all buffers can’t exceed this limit. If received buffers reach this limit, they will be merged into a file.
worker.vertex_properties_combiner_class | org.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombiner | The combiner can combine several properties of the same vertex into one properties at inputstep.
worker.wait_finish_messages_timeout | 86400000 | The max timeout(in ms) message-handler wait for finish-message of all workers.
worker.wait_sort_timeout | 600000 | The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers.
worker.write_buffer_capacity | 52428800 | The initial size of write buffer that used to store vertex or message.
worker.write_buffer_threshold | 52428800 | The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message.
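For reference only, a few of the options above written in plain key=value form; the values are the defaults from the table, and the actual file name and loading mechanism depend on how the computer job is launched:

    # defaults taken from the table above (illustration only)
    hugegraph.url=http://127.0.0.1:8080
    hugegraph.name=hugegraph
    job.workers_count=1
    bsp.etcd_endpoints=http://localhost:2379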

    K8s Operator Config Options

NOTE: These options are set through environment variables; the option name is converted accordingly, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL

config option | default value | description
k8s.auto_destroy_pod | true | Whether to automatically destroy all pods when the job is completed or failed.
k8s.close_reconciler_timeout | 120 | The max timeout(in ms) to close reconciler.
k8s.internal_etcd_url | http://127.0.0.1:2379 | The internal etcd url for operator system.
k8s.max_reconcile_retry | 3 | The max retry times of reconcile.
k8s.probe_backlog | 50 | The maximum backlog for serving health probes.
k8s.probe_port | 9892 | The value is the port that the controller bind to for serving health probes.
k8s.ready_check_internal | 1000 | The time interval(ms) of check ready.
k8s.ready_timeout | 30000 | The max timeout(in ms) of check ready.
k8s.reconciler_count | 10 | The max number of reconciler thread.
k8s.resync_period | 600000 | The minimum frequency at which watched resources are reconciled.
k8s.timezone | Asia/Shanghai | The timezone of computer job and operator.
k8s.watch_namespace | hugegraph-computer-system | The value is watch custom resources in the namespace, ignore other namespaces, the ‘*’ means is all namespaces will be watched.
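Following the conversion shown in the note above (the k8s. prefix is dropped and the rest is upper-cased), overriding two of these options through environment variables could look like this; the values are illustrative:

    # illustrative values; names derived from the note's conversion rule
    export INTERNAL_ETCD_URL=http://127.0.0.1:2379
    export WATCH_NAMESPACE=hugegraph-computer-system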

    HugeGraph-Computer CRD

    CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

spec | default value | description | required
algorithmName | | The name of algorithm. | true
jobId | | The job id. | true
image | | The image of algorithm. | true
computerConf | | The map of computer config options. | true
workerInstances | | The number of worker instances, it will override the ‘job.workers_count’ option. | true
pullPolicy | Always | The pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy | false
pullSecrets | | The pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod | false
masterCpu | | The cpu limit of master, the unit can be ’m’ or without unit, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
workerCpu | | The cpu limit of worker, the unit can be ’m’ or without unit, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu | false
masterMemory | | The memory limit of master, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
workerMemory | | The memory limit of worker, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki, detail please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memory | false
log4jXml | | The content of log4j.xml for computer job. | false
jarFile | | The jar path of computer algorithm. | false
remoteJarUri | | The remote jar uri of computer algorithm, it will overlay algorithm image. | false
jvmOptions | | The java startup parameters of computer job. | false
envVars | | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/ | false
envFrom | | please refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/ | false
masterCommand | bin/start-computer.sh | The run command of master, equivalent to ‘Entrypoint’ field of Docker. | false
masterArgs | ["-r master", "-d k8s"] | The run args of master, equivalent to ‘Cmd’ field of Docker. | false
workerCommand | bin/start-computer.sh | The run command of worker, equivalent to ‘Entrypoint’ field of Docker. | false
workerArgs | ["-r worker", "-d k8s"] | The run args of worker, equivalent to ‘Cmd’ field of Docker. | false
volumes | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
volumeMounts | | Please refer to: https://kubernetes.io/docs/concepts/storage/volumes/ | false
secretPaths | | The map of k8s-secret name and mount path. | false
configMapPaths | | The map of k8s-configmap name and mount path. | false
podTemplateSpec | | Please refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpec | false
securityContext | | Please refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ | false

    KubeDriver Config Options

config option | default value | description
k8s.build_image_bash_path | | The path of command used to build image.
k8s.enable_internal_algorithm | true | Whether enable internal algorithm.
k8s.framework_image_url | hugegraph/hugegraph-computer:latest | The image url of computer framework.
k8s.image_repository_password | | The password for login image repository.
k8s.image_repository_registry | | The address for login image repository.
k8s.image_repository_url | hugegraph/hugegraph-computer | The url of image repository.
k8s.image_repository_username | | The username for login image repository.
k8s.internal_algorithm | [pageRank] | The name list of all internal algorithm.
k8s.internal_algorithm_image_url | hugegraph/hugegraph-computer:latest | The image url of internal algorithm.
k8s.jar_file_dir | /cache/jars/ | The directory where the algorithm jar to upload location.
k8s.kube_config | ~/.kube/config | The path of k8s config file.
k8s.log4j_xml_path | | The log4j.xml path for computer job.
k8s.namespace | hugegraph-computer-system | The namespace of hugegraph-computer system.
k8s.pull_secret_names | [] | The names of pull-secret for pulling image.
    diff --git a/cn/docs/config/config-authentication/index.html b/cn/docs/config/config-authentication/index.html index 88306069a..ad369cc79 100644 --- a/cn/docs/config/config-authentication/index.html +++ b/cn/docs/config/config-authentication/index.html @@ -54,7 +54,7 @@ auth.admin_token=token-value-a auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]

Configure gremlin.graph in the configuration file hugegraph{n}.properties:

    gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy

Custom user authentication system

If a more flexible user system is required, the authenticator can be extended: implement the interface com.baidu.hugegraph.auth.HugeAuthenticator, then point the authenticator option in the configuration file to that implementation.
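As an illustration only, pointing the authenticator option at a custom implementation might look like the line below; both the option key and the class name are hypothetical placeholders, not taken from this page:

    # hypothetical option key and class, for illustration only
    auth.authenticator=com.example.auth.MyAuthenticator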
diff --git a/cn/docs/config/config-computer/index.html index eda61bfcb..cf805f443 100644 --- a/cn/docs/config/config-computer/index.html +++ b/cn/docs/config/config-computer/index.html @@ -17,7 +17,7 @@

    + Print entire section

    HugeGraph-Computer 配置

    Computer Config Options

    config optiondefault valuedescription
    algorithm.message_classorg.apache.hugegraph.computer.core.config.NullThe class of message passed when compute vertex.
    algorithm.params_classorg.apache.hugegraph.computer.core.config.NullThe class used to transfer algorithms’ parameters before algorithm been run.
    algorithm.result_classorg.apache.hugegraph.computer.core.config.NullThe class of vertex’s value, the instance is used to store computation result for the vertex.
    allocator.max_vertices_per_thread10000Maximum number of vertices per thread processed in each memory allocator
    bsp.etcd_endpointshttp://localhost:2379The end points to access etcd.
    bsp.log_interval30000The log interval(in ms) to print the log while waiting bsp event.
    bsp.max_super_step10The max super step of the algorithm.
    bsp.register_timeout300000The max timeout to wait for master and works to register.
    bsp.wait_master_timeout86400000The max timeout(in ms) to wait for master bsp event.
    bsp.wait_workers_timeout86400000The max timeout to wait for workers bsp event.
    hgkv.max_data_block_size65536The max byte size of hgkv-file data block.
    hgkv.max_file_size2147483648The max number of bytes in each hgkv-file.
    hgkv.max_merge_files10The max number of files to merge at one time.
    hgkv.temp_file_dir/tmp/hgkvThis folder is used to store temporary files, temporary files will be generated during the file merging process.
    hugegraph.namehugegraphThe graph name to load data and write results back.
    hugegraph.urlhttp://127.0.0.1:8080The hugegraph url to load data and write results back.
    input.edge_directionOUTThe data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
    input.edge_freqMULTIPLEThe frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
    input.filter_classorg.apache.hugegraph.computer.core.input.filter.DefaultInputFilterThe class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
    input.loader_schema_pathThe schema path of loader input, only takes effect when the input.source_type=loader is enabled
    input.loader_struct_pathThe struct path of loader input, only takes effect when the input.source_type=loader is enabled
    input.max_edges_in_one_vertex200The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
    input.source_typehugegraph-serverThe source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
    input.split_fetch_timeout300The timeout in seconds to fetch input splits
    input.split_max_splits10000000The maximum number of input splits
    input.split_page_size500The page size for streamed load input split data
    input.split_size1048576The input split size in bytes
    job.idlocal_0001The job id on Yarn cluster or K8s cluster.
    job.partitions_count1The partitions count for computing one graph algorithm job.
    job.partitions_thread_nums4The number of threads for partition parallel compute.
    job.workers_count1The workers count for computing one graph algorithm job.
    master.computation_classorg.apache.hugegraph.computer.core.master.DefaultMasterComputationMaster-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
    output.batch_size500The batch size of output
    output.batch_threads1The threads number used to batch output
    output.hdfs_core_site_pathThe hdfs core site path.
    output.hdfs_delimiter,The delimiter of hdfs output.
    output.hdfs_kerberos_enablefalseIs Kerberos authentication enabled for Hdfs.
    output.hdfs_kerberos_keytabThe Hdfs’s key tab file for kerberos authentication.
    output.hdfs_kerberos_principalThe Hdfs’s principal for kerberos authentication.
    output.hdfs_krb5_conf/etc/krb5.confKerberos configuration file.
    output.hdfs_merge_partitionstrueWhether merge output files of multiple partitions.
    output.hdfs_path_prefix/hugegraph-computer/resultsThe directory of hdfs output result.
    output.hdfs_replication3The replication number of hdfs.
    output.hdfs_site_pathThe hdfs site path.
    output.hdfs_urlhdfs://127.0.0.1:9000The hdfs url of output.
    output.hdfs_userhadoopThe hdfs user of output.
    output.output_classorg.apache.hugegraph.computer.core.output.LogOutputThe class to output the computation result of each vertex. Be called after iteration computation.
    output.result_namevalueThe value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
    output.result_write_typeOLAP_COMMONThe result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
    output.retry_interval10The retry interval when output failed
    output.retry_times3The retry times when output failed
    output.single_threads1The threads number used to single output
    output.thread_pool_shutdown_timeout60The timeout seconds of output threads pool shutdown
    output.with_adjacent_edgesfalseOutput the adjacent edges of the vertex or not
    output.with_edge_propertiesfalseOutput the properties of the edge or not
    output.with_vertex_propertiesfalseOutput the properties of the vertex or not
    sort.thread_nums4The number of threads performing internal sorting.
    transport.client_connect_timeout3000The timeout(in ms) of client connect to server.
    transport.client_threads4The number of transport threads for client.
    transport.close_timeout10000The timeout(in ms) of close server or close client.
    transport.finish_session_timeout0The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
    transport.heartbeat_interval20000The minimum interval(in ms) between heartbeats on client side.
    transport.io_modeAUTOThe network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
    transport.max_pending_requests8The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
    transport.max_syn_backlog511The capacity of SYN queue on server side, 0 means using system default value.
    transport.max_timeout_heartbeat_count120The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
    transport.min_ack_interval200The minimum interval(in ms) of server reply ack.
    transport.min_pending_requests6The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
    transport.network_retries3The number of retry attempts for network communication if the network is unstable.
    transport.provider_classorg.apache.hugegraph.computer.core.network.netty.NettyTransportProviderThe transport provider, currently only supports Netty.
    transport.receive_buffer_size0The size of socket receive-buffer in bytes, 0 means using system default value.
    transport.recv_file_modetrueWhether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
    transport.send_buffer_size0The size of socket send-buffer in bytes, 0 means using system default value.
    transport.server_host127.0.0.1The server hostname or ip to listen on to transfer data.
    transport.server_idle_timeout360000The max timeout(in ms) of server idle.
    transport.server_port0The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
    transport.server_threads4The number of transport threads for server.
    transport.sync_request_timeout10000The timeout(in ms) to wait response after sending sync-request.
    transport.tcp_keep_alivetrueWhether enable TCP keep-alive.
    transport.transport_epoll_ltfalseWhether enable EPOLL level-trigger.
    transport.write_buffer_high_mark67108864The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
    transport.write_buffer_low_mark33554432The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.
    transport.write_socket_timeout3000The timeout(in ms) to write data to socket buffer.
    valuefile.max_segment_size1073741824The max number of bytes in each segment of value-file.
    worker.combiner_classorg.apache.hugegraph.computer.core.config.NullCombiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
    worker.computation_classorg.apache.hugegraph.computer.core.config.NullThe class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
    worker.data_dirs[jobs]The directories separated by ‘,’ that received vertices and messages can persist into.
    worker.edge_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same edge into one properties at inputstep.
    worker.partitionerorg.apache.hugegraph.computer.core.graph.partition.HashPartitionerThe partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
    worker.received_buffers_bytes_limit104857600The limit bytes of buffers of received data, the total size of all buffers can’t excess this limit. If received buffers reach this limit, they will be merged into a file.
    worker.vertex_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same vertex into one properties at inputstep.
    worker.wait_finish_messages_timeout86400000The max timeout(in ms) message-handler wait for finish-message of all workers.
    worker.wait_sort_timeout600000The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers.
    worker.write_buffer_capacity52428800The initial size of write buffer that used to store vertex or message.
    worker.write_buffer_threshold52428800The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message.
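
    To make the table concrete, here is a sketch of a few of these options written as key=value overrides. Whether they end up in a properties file or in the CRD’s computerConf map depends on how the job is submitted, and the values below are illustrative only:

    # Hypothetical overrides for a computer job (illustrative values only)
    hugegraph.url=http://127.0.0.1:8080
    hugegraph.name=hugegraph
    job.partitions_count=4
    job.workers_count=3
    output.output_class=org.apache.hugegraph.computer.core.output.LogOutput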

    K8s Operator Config Options

    NOTE: Options need to be set via environment variables; the option name is converted accordingly, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
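
    For example, following that conversion rule, two of the operator options below would be supplied as environment variables (the values shown are the defaults from the table):

    # k8s.internal_etcd_url => INTERNAL_ETCD_URL, k8s.watch_namespace => WATCH_NAMESPACE
    INTERNAL_ETCD_URL=http://127.0.0.1:2379
    WATCH_NAMESPACE=hugegraph-computer-system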

    config optiondefault valuedescription
    k8s.auto_destroy_podtrueWhether to automatically destroy all pods when the job is completed or failed.
    k8s.close_reconciler_timeout120The max timeout(in ms) to close reconciler.
    k8s.internal_etcd_urlhttp://127.0.0.1:2379The internal etcd url for operator system.
    k8s.max_reconcile_retry3The max retry times of reconcile.
    k8s.probe_backlog50The maximum backlog for serving health probes.
    k8s.probe_port9892The value is the port that the controller bind to for serving health probes.
    k8s.ready_check_internal1000The time interval(ms) of check ready.
    k8s.ready_timeout30000The max timeout(in ms) of check ready.
    k8s.reconciler_count10The max number of reconciler thread.
    k8s.resync_period600000The minimum frequency at which watched resources are reconciled.
    k8s.timezoneAsia/ShanghaiThe timezone of computer job and operator.
    k8s.watch_namespacehugegraph-computer-systemWatch custom resources only in this namespace and ignore other namespaces; ‘*’ means all namespaces will be watched.

    HugeGraph-Computer CRD

    CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

    specdefault valuedescriptionrequired
    algorithmNameThe name of algorithm.true
    jobIdThe job id.true
    imageThe image of algorithm.true
    computerConfThe map of computer config options.true
    workerInstancesThe number of worker instances, it will override the ‘job.workers_count’ option.true
    pullPolicyAlwaysThe pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policyfalse
    pullSecretsThe pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-podfalse
    masterCpuThe cpu limit of master, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    workerCpuThe cpu limit of worker, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    masterMemoryThe memory limit of master, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    workerMemoryThe memory limit of worker, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    log4jXmlThe content of log4j.xml for computer job.false
    jarFileThe jar path of computer algorithm.false
    remoteJarUriThe remote jar uri of computer algorithm, it will override the jar in the algorithm image.false
    jvmOptionsThe java startup parameters of computer job.false
    envVarsplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/false
    envFromplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/false
    masterCommandbin/start-computer.shThe run command of master, equivalent to ‘Entrypoint’ field of Docker.false
    masterArgs["-r master", “-d k8s”]The run args of master, equivalent to ‘Cmd’ field of Docker.false
    workerCommandbin/start-computer.shThe run command of worker, equivalent to ‘Entrypoint’ field of Docker.false
    workerArgs["-r worker", “-d k8s”]The run args of worker, equivalent to ‘Cmd’ field of Docker.false
    volumesPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    volumeMountsPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    secretPathsThe map of k8s-secret name and mount path.false
    configMapPathsThe map of k8s-configmap name and mount path.false
    podTemplateSpecPlease refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpecfalse
    securityContextPlease refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/false
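
    A minimal sketch of such a custom resource using a few of the spec fields above. The apiVersion and kind shown here are assumptions (placeholders), so check the exact field layout against the CRD file linked above:

    # Hypothetical HugeGraphComputerJob manifest -- apiVersion/kind are placeholders, verify against the CRD above
    apiVersion: hugegraph.apache.org/v1
    kind: HugeGraphComputerJob
    metadata:
      name: pagerank-demo
      namespace: hugegraph-computer-system
    spec:
      algorithmName: page-rank
      jobId: pagerank-demo-0001
      image: hugegraph/hugegraph-computer:latest
      workerInstances: 3
      computerConf:
        hugegraph.url: "http://127.0.0.1:8080"
        job.partitions_count: "4"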

    KubeDriver Config Options

    config optiondefault valuedescription
    k8s.build_image_bash_pathThe path of command used to build image.
    k8s.enable_internal_algorithmtrueWhether enable internal algorithm.
    k8s.framework_image_urlhugegraph/hugegraph-computer:latestThe image url of computer framework.
    k8s.image_repository_passwordThe password for login image repository.
    k8s.image_repository_registryThe address for login image repository.
    k8s.image_repository_urlhugegraph/hugegraph-computerThe url of image repository.
    k8s.image_repository_usernameThe username for login image repository.
    k8s.internal_algorithm[pageRank]The name list of all internal algorithm.
    k8s.internal_algorithm_image_urlhugegraph/hugegraph-computer:latestThe image url of internal algorithm.
    k8s.jar_file_dir/cache/jars/The directory to which the algorithm jar is uploaded.
    k8s.kube_config~/.kube/configThe path of k8s config file.
    k8s.log4j_xml_pathThe log4j.xml path for computer job.
    k8s.namespacehugegraph-computer-systemThe namespace of hugegraph-computer system.
    k8s.pull_secret_names[]The names of pull-secret for pulling image.
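
    As an illustration, a few of the KubeDriver options above with placeholder values:

    # Hypothetical KubeDriver settings (illustrative values)
    k8s.kube_config=~/.kube/config
    k8s.namespace=hugegraph-computer-system
    k8s.framework_image_url=hugegraph/hugegraph-computer:latest
    k8s.enable_internal_algorithm=true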

    Last modified November 28, 2022: improve computer doc (#157) (862b048)
    diff --git a/cn/docs/config/config-guide/index.html b/cn/docs/config/config-guide/index.html index bbb674830..bc7b91f57 100644 --- a/cn/docs/config/config-guide/index.html +++ b/cn/docs/config/config-guide/index.html @@ -223,7 +223,7 @@

    Stop the Server, run init-store.sh to initialize the store (creating the database for the new graph), then restart the Server

    $ bin/stop-hugegraph.sh
     $ bin/init-store.sh
     $ bin/start-hugegraph.sh
    -

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/config/config-https/index.html b/cn/docs/config/config-https/index.html index 6990572f8..5898ff3e5 100644 --- a/cn/docs/config/config-https/index.html +++ b/cn/docs/config/config-https/index.html @@ -59,7 +59,7 @@ Country code: CN
    1. Export the server certificate from the server's private key
    keytool -export -alias serverkey -keystore server.keystore -file server.crt
     

    server.crt is the server's certificate

    Client

    keytool -import -alias serverkey -file server.crt -keystore client.truststore
    -

    client.truststore is for the client to use; it stores the trusted certificates


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/config/config-option/index.html b/cn/docs/config/config-option/index.html index 388dcbac1..0fa617c6b 100644 --- a/cn/docs/config/config-option/index.html +++ b/cn/docs/config/config-option/index.html @@ -24,7 +24,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Config Options

    Gremlin Server Config Options

    Corresponding configuration file: gremlin-server.yaml

    config optiondefault valuedescription
    host127.0.0.1The host or ip of Gremlin Server.
    port8182The listening port of Gremlin Server.
    graphshugegraph: conf/hugegraph.propertiesThe map of graphs with name and config file path.
    scriptEvaluationTimeout30000The timeout for gremlin script execution(millisecond).
    channelizerorg.apache.tinkerpop.gremlin.server.channel.HttpChannelizerIndicates the protocol which the Gremlin Server provides service.
    authenticationauthenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties}The authenticator and config(contains tokens path) of authentication mechanism.
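
    Read together, the options above map onto a gremlin-server.yaml fragment like the following sketch; the values are simply the defaults listed in the table, and the real file contains many more settings:

    # Hypothetical gremlin-server.yaml fragment (values are the table defaults)
    host: 127.0.0.1
    port: 8182
    scriptEvaluationTimeout: 30000
    channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
    graphs: {
      hugegraph: conf/hugegraph.properties
    }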

    Rest Server & API Config Options

    Corresponding configuration file: rest-server.properties

    config optiondefault valuedescription
    graphs[hugegraph:conf/hugegraph.properties]The map of graphs’ name and config file.
    server.idserver-1The id of rest server, used for license verification.
    server.rolemasterThe role of nodes in the cluster, available types are [master, worker, computer]
    restserver.urlhttp://127.0.0.1:8080The url for listening of rest server.
    ssl.keystore_fileserver.keystoreThe path of server keystore file used when https protocol is enabled.
    ssl.keystore_passwordThe password of the path of the server keystore file used when the https protocol is enabled.
    restserver.max_worker_threads2 * CPUsThe maximum worker threads of rest server.
    restserver.min_free_memory64The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
    restserver.request_timeout30The time in seconds within which a request must complete, -1 means no timeout.
    restserver.connection_idle_timeout30The time in seconds to keep an inactive connection alive, -1 means no timeout.
    restserver.connection_max_requests256The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
    gremlinserver.urlhttp://127.0.0.1:8182The url of gremlin server.
    gremlinserver.max_route8The max route number for gremlin server.
    gremlinserver.timeout30The timeout in seconds of waiting for gremlin server.
    batch.max_edges_per_batch500The maximum number of edges submitted per batch.
    batch.max_vertices_per_batch500The maximum number of vertices submitted per batch.
    batch.max_write_ratio50The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
    batch.max_write_threads0The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
    auth.authenticatorThe class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.admin_token162f7848-0b6d-4faf-b557-3a0797869c55Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.graph_storehugegraphThe name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
    auth.user_tokens[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.audit_log_rate1000.0The max rate of audit log output per user, default value is 1000 records per second.
    auth.cache_capacity10240The max cache capacity of each auth cache item.
    auth.cache_expire600The expiration time in seconds of the auth cache.
    auth.remote_urlIf the address is empty, this node provides the auth service; otherwise it acts as an auth client and also provides the auth service through rpc forwarding. The remote url can be set to multiple addresses, concatenated by ‘,’.
    auth.token_expire86400The expiration time in seconds after token created
    auth.token_secretFXQXbJtbCLxODc6tGci732pkH1cyf8QgSecret key of HS256 algorithm.
    exception.allow_tracefalseWhether to allow exception trace stack.
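
    For illustration, a rest-server.properties fragment assembled from the options above; the authenticator line and token are just the example values from the table:

    # Hypothetical rest-server.properties fragment (values taken from the table above)
    restserver.url=http://127.0.0.1:8080
    graphs=[hugegraph:conf/hugegraph.properties]
    gremlinserver.url=http://127.0.0.1:8182
    # token-based auth, only meaningful for ConfigAuthenticator
    auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
    auth.user_tokens=[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]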

    Basic Config Options

    Basic and backend config options correspond to the configuration file {graph-name}.properties, e.g. hugegraph.properties

    config optiondefault valuedescription
    gremlin.graphcom.baidu.hugegraph.HugeFactoryGremlin entrance to create graph.
    backendrocksdbThe data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
    serializerbinaryThe serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
    storehugegraphThe database name like Cassandra Keyspace.
    store.connection_detect_interval600The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
    store.graphgThe graph table name, which store vertex, edge and property.
    store.schemamThe schema table name, which store meta data.
    store.systemsThe system table name, which store system data.
    schema.illegal_name_regex.\s+$|~.The regex specified the illegal format for schema name.
    schema.cache_capacity10000The max cache size(items) of schema cache.
    vertex.cache_typel2The type of vertex cache, allowed values are [l1, l2].
    vertex.cache_capacity10000000The max cache size(items) of vertex cache.
    vertex.cache_expire600The expire time in seconds of vertex cache.
    vertex.check_customized_id_existfalseWhether to check the vertices exist for those using customized id strategy.
    vertex.default_labelvertexThe default vertex label.
    vertex.tx_capacity10000The max size(items) of vertices(uncommitted) in transaction.
    vertex.check_adjacent_vertex_existfalseWhether to check the adjacent vertices of edges exist.
    vertex.lazy_load_adjacent_vertextrueWhether to lazy load adjacent vertices of edges.
    vertex.part_edge_commit_size5000Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
    vertex.encode_primary_key_numbertrueWhether to encode number value of primary key in vertex id.
    vertex.remove_left_index_at_overwritefalseWhether remove left index at overwrite.
    edge.cache_typel2The type of edge cache, allowed values are [l1, l2].
    edge.cache_capacity1000000The max cache size(items) of edge cache.
    edge.cache_expire600The expiration time in seconds of edge cache.
    edge.tx_capacity10000The max size(items) of edges(uncommitted) in transaction.
    query.page_size500The size of each page when querying by paging.
    query.batch_size1000The size of each batch when querying by batch.
    query.ignore_invalid_datatrueWhether to ignore invalid data of vertex or edge.
    query.index_intersect_threshold1000The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
    query.ramtable_edges_capacity20000000The maximum number of edges in ramtable, include OUT and IN edges.
    query.ramtable_enablefalseWhether to enable ramtable for query of adjacent edges.
    query.ramtable_vertices_capacity10000000The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
    query.optimize_aggregate_by_indexfalseWhether to optimize aggregate query(like count) by index.
    oltp.concurrent_depth10The min depth to enable concurrent oltp algorithm.
    oltp.concurrent_threads10Thread number to concurrently execute oltp algorithm.
    oltp.collection_typeECThe implementation type of collections used in oltp algorithm.
    rate_limit.read0The max rate(times/s) to execute query of vertices/edges.
    rate_limit.write0The max rate(items/s) to add/update/delete vertices/edges.
    task.wait_timeout10Timeout in seconds for waiting for the task to complete,such as when truncating or clearing the backend.
    task.input_size_limit16777216The job input size limit in bytes.
    task.result_size_limit16777216The job result size limit in bytes.
    task.sync_deletionfalseWhether to delete schema or expired data synchronously.
    task.ttl_delete_batch1The batch size used to delete expired data.
    computer.config/conf/computer.yamlThe config file path of computer job.
    search.text_analyzerikanalyzerChoose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
    search.text_analyzer_modesmartSpecify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
    snowflake.datecenter_id0The datacenter id of snowflake id generator.
    snowflake.force_stringfalseWhether to force the snowflake long id to be a string.
    snowflake.worker_id0The worker id of snowflake id generator.
    raft.modefalseWhether the backend storage works in raft mode.
    raft.safe_readfalseWhether to use linearly consistent read.
    raft.use_snapshotfalseWhether to use snapshot.
    raft.endpoint127.0.0.1:8281The peerid of current raft node.
    raft.group_peers127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283The peers of current raft group.
    raft.path./raft-logThe log path of current raft node.
    raft.use_replicator_pipelinetrueWhether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel, and the next log doesn’t have to wait for the ack message of the current log before being sent.
    raft.election_timeout10000Timeout in milliseconds to launch a round of election.
    raft.snapshot_interval3600The interval in seconds to trigger snapshot save.
    raft.backend_threadscurrent CPU v-coresThe thread number used to apply task to backend.
    raft.read_index_threads8The thread number used to execute reading index.
    raft.apply_batch1The apply batch size to trigger disruptor event handler.
    raft.queue_size16384The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
    raft.queue_publish_timeout60The timeout in second when publish event into disruptor.
    raft.rpc_threads80The rpc threads for jraft RPC layer.
    raft.rpc_connect_timeout5000The rpc connect timeout for jraft rpc.
    raft.rpc_timeout60000The rpc timeout for jraft rpc.
    raft.rpc_buf_low_water_mark10485760The ChannelOutboundBuffer’s low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
    raft.rpc_buf_high_water_mark20971520The ChannelOutboundBuffer’s high water mark of netty, only when buffer size exceed this size, the method ChannelOutboundBuffer.isWritable() will return false, it means that the downstream pressure is too great to process the request or network is very congestion, upstream needs to limit rate at this time.
    raft.read_strategyReadOnlyLeaseBasedThe linearizability of read strategy.
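
    For illustration, the smallest useful {graph-name}.properties built from the defaults above:

    # Hypothetical hugegraph.properties fragment (defaults from the table above)
    gremlin.graph=com.baidu.hugegraph.HugeFactory
    backend=rocksdb
    serializer=binary
    store=hugegraph
    # all other options (caches, raft, rate limits, ...) keep their defaults unless overridden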

    RPC Server Config

    config optiondefault valuedescription
    rpc.client_connect_timeout20The timeout(in seconds) of rpc client connect to rpc server.
    rpc.client_load_balancerconsistentHashThe rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is ‘consistentHash’, means forwarding by request parameters.
    rpc.client_read_timeout40The timeout(in seconds) of rpc client read from rpc server.
    rpc.client_reconnect_period10The period(in seconds) of rpc client reconnect to rpc server.
    rpc.client_retries3Failed retry number of rpc client calls to rpc server.
    rpc.config_order999Sofa rpc configuration file loading order, the larger the more later loading.
    rpc.logger_implcom.alipay.sofa.rpc.log.SLF4JLoggerImplSofa rpc log implementation class.
    rpc.protocolboltRpc communication protocol, client and server need to be specified the same value.
    rpc.remote_urlThe remote urls of rpc peers, it can be set to multiple addresses, which are concat by ‘,’, empty value means not enabled.
    rpc.server_adaptive_portfalseWhether the bound port is adaptive, if it’s enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
    rpc.server_hostThe hosts/ips bound by rpc server to provide services, empty value means not enabled.
    rpc.server_port8090The port bound by rpc server to provide services.
    rpc.server_timeout30The timeout(in seconds) of rpc server execution.
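
    A sketch of turning the rpc server on for one node of a cluster; the addresses below are placeholders:

    # Hypothetical rpc settings (addresses are placeholders)
    rpc.server_host=192.168.0.1
    rpc.server_port=8090
    rpc.remote_url=192.168.0.2:8090,192.168.0.3:8090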

    Cassandra Backend Config Options

    config optiondefault valuedescription
    backendMust be set to cassandra.
    serializerMust be set to cassandra.
    cassandra.hostlocalhostThe seeds hostname or ip address of cassandra cluster.
    cassandra.port9042The seeds port address of cassandra cluster.
    cassandra.connect_timeout5The cassandra driver connect server timeout(seconds).
    cassandra.read_timeout20The cassandra driver read from server timeout(seconds).
    cassandra.keyspace.strategySimpleStrategyThe replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
    cassandra.keyspace.replication[3]The keyspace replication factor of SimpleStrategy, like ‘[3]’.Or replicas in each datacenter of NetworkTopologyStrategy, like ‘[dc1:2,dc2:1]’.
    cassandra.usernameThe username to use to login to cassandra cluster.
    cassandra.passwordThe password corresponding to cassandra.username.
    cassandra.compression_typenoneThe compression algorithm of cassandra transport: none/snappy/lz4.
    cassandra.jmx_port7199The port of JMX API service for cassandra.
    cassandra.aggregation_timeout43200The timeout in seconds of waiting for aggregation.
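
    A sketch of the Cassandra part of {graph-name}.properties; the host and keyspace replication are illustrative values:

    # Hypothetical Cassandra backend settings (illustrative values)
    backend=cassandra
    serializer=cassandra
    cassandra.host=localhost
    cassandra.port=9042
    cassandra.keyspace.strategy=SimpleStrategy
    cassandra.keyspace.replication=[3]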

    ScyllaDB Backend Config Options

    config optiondefault valuedescription
    backendMust be set to scylladb.
    serializerMust be set to scylladb.

    All other options are the same as for the Cassandra backend.
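
    For example, switching to ScyllaDB only changes the backend and serializer values, while the cassandra.* connection options are reused unchanged (a sketch with a placeholder host):

    # Hypothetical ScyllaDB backend settings (reuses the cassandra.* options)
    backend=scylladb
    serializer=scylladb
    cassandra.host=localhost
    cassandra.port=9042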

    RocksDB Backend Config Options

    config optiondefault valuedescription
    backendMust be set to rocksdb.
    serializerMust be set to binary.
    rocksdb.data_disks[]The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]
    rocksdb.data_pathrocksdb-dataThe path for storing data of RocksDB.
    rocksdb.wal_pathrocksdb-dataThe path for storing WAL of RocksDB.
    rocksdb.allow_mmap_readsfalseAllow the OS to mmap file for reading sst tables.
    rocksdb.allow_mmap_writesfalseAllow the OS to mmap file for writing.
    rocksdb.block_cache_capacity8388608The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
    rocksdb.bloom_filter_bits_per_key-1The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
    rocksdb.bloom_filter_block_based_modefalseUse block based filter rather than full filter.
    rocksdb.bloom_filter_whole_key_filteringtrueTrue if place whole keys in the bloom filter, else place the prefix of keys.
    rocksdb.bottommost_compressionNO_COMPRESSIONThe compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.bulkload_modefalseSwitch to the mode to bulk load data into RocksDB.
    rocksdb.cache_index_and_filter_blocksfalseIndicating if we’d put index/filter blocks to the block cache.
    rocksdb.compaction_styleLEVELSet compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
    rocksdb.compressionSNAPPY_COMPRESSIONThe compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.compression_per_level[NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION]The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.delayed_write_rate16777216The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
    rocksdb.log_levelINFOThe info log level of RocksDB.
    rocksdb.max_background_jobs8Maximum number of concurrent background jobs, including flushes and compactions.
    rocksdb.level_compaction_dynamic_level_bytesfalseWhether to enable level_compaction_dynamic_level_bytes, if it’s enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it’s not recommended.
    rocksdb.max_bytes_for_level_base536870912The upper-bound of the total size of level-1 files in bytes.
    rocksdb.max_bytes_for_level_multiplier10.0The ratio between the total size of level (L+1) files and the total size of level L files for all L.
    rocksdb.max_open_files-1The maximum number of open files that can be cached by RocksDB, -1 means no limit.
    rocksdb.max_subcompactions4The value represents the maximum number of threads per compaction job.
    rocksdb.max_write_buffer_number6The maximum number of write buffers that are built up in memory.
    rocksdb.max_write_buffer_number_to_maintain0The total maximum number of write buffers to maintain in memory.
    rocksdb.min_write_buffer_number_to_merge2The minimum number of write buffers that will be merged together.
    rocksdb.num_levels7Set the number of levels for this database.
    rocksdb.optimize_filters_for_hitsfalseThis flag allows us to not store filters for the last level.
    rocksdb.optimize_modetrueOptimize for heavy workloads and big datasets.
    rocksdb.pin_l0_filter_and_index_blocks_in_cachefalseIndicating if we’d put index/filter blocks to the block cache.
    rocksdb.sst_pathThe path for ingesting SST file into RocksDB.
    rocksdb.target_file_size_base67108864The target file size for compaction in bytes.
    rocksdb.target_file_size_multiplier1The size ratio between a level L file and a level (L+1) file.
    rocksdb.use_direct_io_for_flush_and_compactionfalseEnable the OS to use direct read/writes in flush and compaction.
    rocksdb.use_direct_readsfalseEnable the OS to use direct I/O for reading sst tables.
    rocksdb.write_buffer_size134217728Amount of data in bytes to build up in memory.
    rocksdb.max_manifest_file_size104857600The max size of manifest file in bytes.
    rocksdb.skip_stats_update_on_db_openfalseWhether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
    rocksdb.max_file_opening_threads16The max number of threads used to open files.
    rocksdb.max_total_wal_size0Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
    rocksdb.db_write_buffer_size0Total size of write buffers in bytes across all column families, 0 means no limit.
    rocksdb.delete_obsolete_files_period21600The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
    rocksdb.hard_pending_compaction_bytes_limit274877906944The hard limit to impose on pending compaction in bytes.
    rocksdb.level0_file_num_compaction_trigger2Number of files to trigger level-0 compaction.
    rocksdb.level0_slowdown_writes_trigger20Soft limit on number of level-0 files for slowing down writes.
    rocksdb.level0_stop_writes_trigger36Hard limit on number of level-0 files for stopping writes.
    rocksdb.soft_pending_compaction_bytes_limit68719476736The soft limit to impose on pending compaction in bytes.
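
    A sketch of RocksDB backend settings; the paths and the rocksdb.data_disks mapping are placeholders, and the data_disks syntax should be checked against the description in the table:

    # Hypothetical RocksDB backend settings (paths are placeholders)
    backend=rocksdb
    serializer=binary
    rocksdb.data_path=rocksdb-data
    rocksdb.wal_path=rocksdb-data
    # optionally spread hot tables over dedicated disks (element format: STORE/TABLE: /path/disk)
    rocksdb.data_disks=[g/vertex: /ssd1/rocksdb-vertex, g/edge_out: /ssd2/rocksdb-edge]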

    HBase Backend Config Options

    config optiondefault valuedescription
    backendMust be set to hbase.
    serializerMust be set to hbase.
    hbase.hostslocalhostThe hostnames or ip addresses of HBase zookeeper, separated with commas.
    hbase.port2181The port address of HBase zookeeper.
    hbase.threads_max64The max threads num of hbase connections.
    hbase.znode_parent/hbaseThe znode parent path of HBase zookeeper.
    hbase.zk_retry3The recovery retry times of HBase zookeeper.
    hbase.aggregation_timeout43200The timeout in seconds of waiting for aggregation.
    hbase.kerberos_enablefalseIs Kerberos authentication enabled for HBase.
    hbase.kerberos_keytabThe HBase’s key tab file for kerberos authentication.
    hbase.kerberos_principalThe HBase’s principal for kerberos authentication.
    hbase.krb5_confetc/krb5.confKerberos configuration file, including KDC IP, default realm, etc.
    hbase.hbase_site/etc/hbase/conf/hbase-site.xmlThe HBase’s configuration file
    hbase.enable_partitiontrueIs pre-split partitions enabled for HBase.
    hbase.vertex_partitions10The number of partitions of the HBase vertex table.
    hbase.edge_partitions30The number of partitions of the HBase edge table.
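
    A sketch of HBase backend settings; the zookeeper hosts are placeholders:

    # Hypothetical HBase backend settings (zookeeper hosts are placeholders)
    backend=hbase
    serializer=hbase
    hbase.hosts=zk1,zk2,zk3
    hbase.port=2181
    hbase.znode_parent=/hbase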

    MySQL & PostgreSQL Backend Config Options

    config optiondefault valuedescription
    backendMust be set to mysql.
    serializerMust be set to mysql.
    jdbc.drivercom.mysql.jdbc.DriverThe JDBC driver class to connect database.
    jdbc.urljdbc:mysql://127.0.0.1:3306The url of database in JDBC format.
    jdbc.usernamerootThe username to login database.
    jdbc.password******The password corresponding to jdbc.username.
    jdbc.ssl_modefalseThe SSL mode of connections with database.
    jdbc.reconnect_interval3The interval(seconds) between reconnections when the database connection fails.
    jdbc.reconnect_max_times3The reconnect times when the database connection fails.
    jdbc.storage_engineInnoDBThe storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
    jdbc.postgresql.connect_databasetemplate1The database used to connect when init store, drop store or check store exist.
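
    A sketch of MySQL backend settings using the defaults above; the url and credentials are placeholders:

    # Hypothetical MySQL backend settings (credentials are placeholders)
    backend=mysql
    serializer=mysql
    jdbc.driver=com.mysql.jdbc.Driver
    jdbc.url=jdbc:mysql://127.0.0.1:3306
    jdbc.username=root
    jdbc.password=******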

    PostgreSQL Backend Config Options

    config optiondefault valuedescription
    backendMust be set to postgresql.
    serializerMust be set to postgresql.

    All other options are the same as for the MySQL backend.

    For the PostgreSQL backend, the driver and url should be set to:

    • jdbc.driver=org.postgresql.Driver
    • jdbc.url=jdbc:postgresql://localhost:5432/
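
    Putting the pieces together, a sketch of the PostgreSQL backend settings (username and password are placeholders):

    # Hypothetical PostgreSQL backend settings (credentials are placeholders)
    backend=postgresql
    serializer=postgresql
    jdbc.driver=org.postgresql.Driver
    jdbc.url=jdbc:postgresql://localhost:5432/
    jdbc.username=postgres
    jdbc.password=******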

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    + Print entire section

    HugeGraph 配置项

    Gremlin Server 配置项

    对应配置文件gremlin-server.yaml

    config optiondefault valuedescription
    host127.0.0.1The host or ip of Gremlin Server.
    port8182The listening port of Gremlin Server.
    graphshugegraph: conf/hugegraph.propertiesThe map of graphs with name and config file path.
    scriptEvaluationTimeout30000The timeout for gremlin script execution(millisecond).
    channelizerorg.apache.tinkerpop.gremlin.server.channel.HttpChannelizerIndicates the protocol which the Gremlin Server provides service.
    authenticationauthenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties}The authenticator and config(contains tokens path) of authentication mechanism.

    Rest Server & API 配置项

    对应配置文件rest-server.properties

    config optiondefault valuedescription
    graphs[hugegraph:conf/hugegraph.properties]The map of graphs’ name and config file.
    server.idserver-1The id of rest server, used for license verification.
    server.rolemasterThe role of nodes in the cluster, available types are [master, worker, computer]
    restserver.urlhttp://127.0.0.1:8080The url for listening of rest server.
    ssl.keystore_fileserver.keystoreThe path of server keystore file used when https protocol is enabled.
    ssl.keystore_passwordThe password of the path of the server keystore file used when the https protocol is enabled.
    restserver.max_worker_threads2 * CPUsThe maximum worker threads of rest server.
    restserver.min_free_memory64The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
    restserver.request_timeout30The time in seconds within which a request must complete, -1 means no timeout.
    restserver.connection_idle_timeout30The time in seconds to keep an inactive connection alive, -1 means no timeout.
    restserver.connection_max_requests256The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
    gremlinserver.urlhttp://127.0.0.1:8182The url of gremlin server.
    gremlinserver.max_route8The max route number for gremlin server.
    gremlinserver.timeout30The timeout in seconds of waiting for gremlin server.
    batch.max_edges_per_batch500The maximum number of edges submitted per batch.
    batch.max_vertices_per_batch500The maximum number of vertices submitted per batch.
    batch.max_write_ratio50The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
    batch.max_write_threads0The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
    auth.authenticatorThe class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.admin_token162f7848-0b6d-4faf-b557-3a0797869c55Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.graph_storehugegraphThe name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
    auth.user_tokens[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.audit_log_rate1000.0The max rate of audit log output per user, default value is 1000 records per second.
    auth.cache_capacity10240The max cache capacity of each auth cache item.
    auth.cache_expire600The expiration time in seconds of vertex cache.
    auth.remote_urlIf the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ‘,’.
    auth.token_expire86400The expiration time in seconds after token created
    auth.token_secretFXQXbJtbCLxODc6tGci732pkH1cyf8QgSecret key of HS256 algorithm.
    exception.allow_tracefalseWhether to allow exception trace stack.

    基本配置项

    基本配置项及后端配置项对应配置文件:{graph-name}.properties,如hugegraph.properties

    config optiondefault valuedescription
    gremlin.graphcom.baidu.hugegraph.HugeFactoryGremlin entrance to create graph.
    backendrocksdbThe data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
    serializerbinaryThe serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
    storehugegraphThe database name like Cassandra Keyspace.
    store.connection_detect_interval600The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
    store.graphgThe graph table name, which store vertex, edge and property.
    store.schemamThe schema table name, which store meta data.
    store.systemsThe system table name, which store system data.
    schema.illegal_name_regex.\s+$|~.The regex specified the illegal format for schema name.
    schema.cache_capacity10000The max cache size(items) of schema cache.
    vertex.cache_typel2The type of vertex cache, allowed values are [l1, l2].
    vertex.cache_capacity10000000The max cache size(items) of vertex cache.
    vertex.cache_expire600The expire time in seconds of vertex cache.
    vertex.check_customized_id_existfalseWhether to check the vertices exist for those using customized id strategy.
    vertex.default_labelvertexThe default vertex label.
    vertex.tx_capacity10000The max size(items) of vertices(uncommitted) in transaction.
    vertex.check_adjacent_vertex_existfalseWhether to check the adjacent vertices of edges exist.
    vertex.lazy_load_adjacent_vertextrueWhether to lazy load adjacent vertices of edges.
    vertex.part_edge_commit_size5000Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
    vertex.encode_primary_key_numbertrueWhether to encode number value of primary key in vertex id.
    vertex.remove_left_index_at_overwritefalseWhether remove left index at overwrite.
    edge.cache_typel2The type of edge cache, allowed values are [l1, l2].
    edge.cache_capacity1000000The max cache size(items) of edge cache.
    edge.cache_expire600The expiration time in seconds of edge cache.
    edge.tx_capacity10000The max size(items) of edges(uncommitted) in transaction.
    query.page_size500The size of each page when querying by paging.
    query.batch_size1000The size of each batch when querying by batch.
    query.ignore_invalid_datatrueWhether to ignore invalid data of vertex or edge.
    query.index_intersect_threshold1000The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
    query.ramtable_edges_capacity20000000The maximum number of edges in ramtable, include OUT and IN edges.
    query.ramtable_enablefalseWhether to enable ramtable for query of adjacent edges.
    query.ramtable_vertices_capacity10000000The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
    query.optimize_aggregate_by_indexfalseWhether to optimize aggregate query(like count) by index.
    oltp.concurrent_depth10The min depth to enable concurrent oltp algorithm.
    oltp.concurrent_threads10Thread number to concurrently execute oltp algorithm.
    oltp.collection_typeECThe implementation type of collections used in oltp algorithm.
    rate_limit.read0The max rate(times/s) to execute query of vertices/edges.
    rate_limit.write0The max rate(items/s) to add/update/delete vertices/edges.
    task.wait_timeout10Timeout in seconds for waiting for the task to complete,such as when truncating or clearing the backend.
    task.input_size_limit16777216The job input size limit in bytes.
    task.result_size_limit16777216The job result size limit in bytes.
    task.sync_deletionfalseWhether to delete schema or expired data synchronously.
    task.ttl_delete_batch1The batch size used to delete expired data.
    computer.config/conf/computer.yamlThe config file path of computer job.
    search.text_analyzerikanalyzerChoose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
    search.text_analyzer_modesmartSpecify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
    snowflake.datecenter_id0The datacenter id of snowflake id generator.
    snowflake.force_stringfalseWhether to force the snowflake long id to be a string.
    snowflake.worker_id0The worker id of snowflake id generator.
    raft.modefalseWhether the backend storage works in raft mode.
    raft.safe_readfalseWhether to use linearly consistent read.
    raft.use_snapshotfalseWhether to use snapshot.
    raft.endpoint127.0.0.1:8281The peerid of current raft node.
    raft.group_peers127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283The peers of current raft group.
    raft.path./raft-logThe log path of current raft node.
    raft.use_replicator_pipelinetrueWhether to use replicator line, when turned on it multiple logs can be sent in parallel, and the next log doesn’t have to wait for the ack message of the current log to be sent.
    raft.election_timeout10000Timeout in milliseconds to launch a round of election.
    raft.snapshot_interval3600The interval in seconds to trigger snapshot save.
    raft.backend_threadscurrent CPU v-coresThe thread number used to apply task to backend.
    raft.read_index_threads8The thread number used to execute reading index.
    raft.apply_batch1The apply batch size to trigger disruptor event handler.
    raft.queue_size16384The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
    raft.queue_publish_timeout60The timeout in second when publish event into disruptor.
    raft.rpc_threads80The rpc threads for jraft RPC layer.
    raft.rpc_connect_timeout5000The rpc connect timeout for jraft rpc.
    raft.rpc_timeout60000The rpc timeout for jraft rpc.
    raft.rpc_buf_low_water_mark10485760The ChannelOutboundBuffer’s low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
    raft.rpc_buf_high_water_mark20971520The ChannelOutboundBuffer’s high water mark of netty, only when buffer size exceed this size, the method ChannelOutboundBuffer.isWritable() will return false, it means that the downstream pressure is too great to process the request or network is very congestion, upstream needs to limit rate at this time.
    raft.read_strategyReadOnlyLeaseBasedThe linearizability of read strategy.

    RPC server 配置

    config optiondefault valuedescription
    rpc.client_connect_timeout20The timeout(in seconds) of rpc client connect to rpc server.
    rpc.client_load_balancerconsistentHashThe rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is ‘consistentHash’, means forwarding by request parameters.
    rpc.client_read_timeout40The timeout(in seconds) of rpc client read from rpc server.
    rpc.client_reconnect_period10The period(in seconds) of rpc client reconnect to rpc server.
    rpc.client_retries3Failed retry number of rpc client calls to rpc server.
    rpc.config_order999Sofa rpc configuration file loading order, the larger the more later loading.
    rpc.logger_implcom.alipay.sofa.rpc.log.SLF4JLoggerImplSofa rpc log implementation class.
    rpc.protocolboltRpc communication protocol, client and server need to be specified the same value.
    rpc.remote_urlThe remote urls of rpc peers, it can be set to multiple addresses, which are concat by ‘,’, empty value means not enabled.
    rpc.server_adaptive_portfalseWhether the bound port is adaptive, if it’s enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
    rpc.server_hostThe hosts/ips bound by rpc server to provide services, empty value means not enabled.
    rpc.server_port8090The port bound by rpc server to provide services.
    rpc.server_timeout30The timeout(in seconds) of rpc server execution.

    Cassandra 后端配置项

    config optiondefault valuedescription
    backendMust be set to cassandra.
    serializerMust be set to cassandra.
    cassandra.hostlocalhostThe seeds hostname or ip address of cassandra cluster.
    cassandra.port9042The seeds port address of cassandra cluster.
    cassandra.connect_timeout5The cassandra driver connect server timeout(seconds).
    cassandra.read_timeout20The cassandra driver read from server timeout(seconds).
    cassandra.keyspace.strategySimpleStrategyThe replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
    cassandra.keyspace.replication[3]The keyspace replication factor of SimpleStrategy, like ‘[3]’.Or replicas in each datacenter of NetworkTopologyStrategy, like ‘[dc1:2,dc2:1]’.
    cassandra.usernameThe username to use to login to cassandra cluster.
    cassandra.passwordThe password corresponding to cassandra.username.
    cassandra.compression_typenoneThe compression algorithm of cassandra transport: none/snappy/lz4.
    cassandra.jmx_port=71997199The port of JMX API service for cassandra.
    cassandra.aggregation_timeout43200The timeout in seconds of waiting for aggregation.

    ScyllaDB 后端配置项

    config optiondefault valuedescription
    backendMust be set to scylladb.
    serializerMust be set to scylladb.

    其它与 Cassandra 后端一致。

    RocksDB 后端配置项

    config optiondefault valuedescription
    backendMust be set to rocksdb.
    serializerMust be set to binary.
    rocksdb.data_disks[]The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]
    rocksdb.data_pathrocksdb-dataThe path for storing data of RocksDB.
    rocksdb.wal_pathrocksdb-dataThe path for storing WAL of RocksDB.
    rocksdb.allow_mmap_readsfalseAllow the OS to mmap file for reading sst tables.
    rocksdb.allow_mmap_writesfalseAllow the OS to mmap file for writing.
    rocksdb.block_cache_capacity8388608The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
    rocksdb.bloom_filter_bits_per_key-1The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
    rocksdb.bloom_filter_block_based_modefalseUse block based filter rather than full filter.
    rocksdb.bloom_filter_whole_key_filteringtrueTrue if place whole keys in the bloom filter, else place the prefix of keys.
    rocksdb.bottommost_compressionNO_COMPRESSIONThe compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.bulkload_modefalseSwitch to the mode to bulk load data into RocksDB.
    rocksdb.cache_index_and_filter_blocksfalseIndicating if we’d put index/filter blocks to the block cache.
    rocksdb.compaction_styleLEVELSet compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
    rocksdb.compressionSNAPPY_COMPRESSIONThe compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.compression_per_level[NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION]The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.delayed_write_rate16777216The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
    rocksdb.log_levelINFOThe info log level of RocksDB.
    rocksdb.max_background_jobs8Maximum number of concurrent background jobs, including flushes and compactions.
    rocksdb.level_compaction_dynamic_level_bytesfalseWhether to enable level_compaction_dynamic_level_bytes, if it’s enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it’s not recommended.
    rocksdb.max_bytes_for_level_base536870912The upper-bound of the total size of level-1 files in bytes.
    rocksdb.max_bytes_for_level_multiplier10.0The ratio between the total size of level (L+1) files and the total size of level L files for all L.
    rocksdb.max_open_files-1The maximum number of open files that can be cached by RocksDB, -1 means no limit.
    rocksdb.max_subcompactions4The value represents the maximum number of threads per compaction job.
    rocksdb.max_write_buffer_number6The maximum number of write buffers that are built up in memory.
    rocksdb.max_write_buffer_number_to_maintain0The total maximum number of write buffers to maintain in memory.
    rocksdb.min_write_buffer_number_to_merge2The minimum number of write buffers that will be merged together.
    rocksdb.num_levels7Set the number of levels for this database.
    rocksdb.optimize_filters_for_hitsfalseThis flag allows us to not store filters for the last level.
    rocksdb.optimize_modetrueOptimize for heavy workloads and big datasets.
    rocksdb.pin_l0_filter_and_index_blocks_in_cachefalseIndicating if we’d put index/filter blocks to the block cache.
    rocksdb.sst_pathThe path for ingesting SST file into RocksDB.
    rocksdb.target_file_size_base67108864The target file size for compaction in bytes.
    rocksdb.target_file_size_multiplier1The size ratio between a level L file and a level (L+1) file.
    rocksdb.use_direct_io_for_flush_and_compactionfalseEnable the OS to use direct read/writes in flush and compaction.
    rocksdb.use_direct_readsfalseEnable the OS to use direct I/O for reading sst tables.
    rocksdb.write_buffer_size134217728Amount of data in bytes to build up in memory.
    rocksdb.max_manifest_file_size104857600The max size of manifest file in bytes.
    rocksdb.skip_stats_update_on_db_openfalseWhether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
    rocksdb.max_file_opening_threads16The max number of threads used to open files.
    rocksdb.max_total_wal_size0Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
    rocksdb.db_write_buffer_size0Total size of write buffers in bytes across all column families, 0 means no limit.
    rocksdb.delete_obsolete_files_period21600The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
    rocksdb.hard_pending_compaction_bytes_limit274877906944The hard limit to impose on pending compaction in bytes.
    rocksdb.level0_file_num_compaction_trigger2Number of files to trigger level-0 compaction.
    rocksdb.level0_slowdown_writes_trigger20Soft limit on number of level-0 files for slowing down writes.
    rocksdb.level0_stop_writes_trigger36Hard limit on number of level-0 files for stopping writes.
    rocksdb.soft_pending_compaction_bytes_limit68719476736The soft limit to impose on pending compaction in bytes.

    HBase 后端配置项

config option | default value | description
backend |  | Must be set to hbase.
serializer |  | Must be set to hbase.
hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated with commas.
hbase.port | 2181 | The port address of HBase zookeeper.
hbase.threads_max | 64 | The max threads num of hbase connections.
hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper.
hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper.
hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation.
hbase.kerberos_enable | false | Is Kerberos authentication enabled for HBase.
hbase.kerberos_keytab |  | The HBase's key tab file for kerberos authentication.
hbase.kerberos_principal |  | The HBase's principal for kerberos authentication.
hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc.
hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The HBase's configuration file
hbase.enable_partition | true | Is pre-split partitions enabled for HBase.
hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table.
hbase.edge_partitions | 30 | The number of partitions of the HBase edge table.
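
For orientation, a hedged snippet built only from the options above (the ZooKeeper hostnames are placeholders, not real defaults):

backend=hbase
serializer=hbase
hbase.hosts=zk1.example.com,zk2.example.com,zk3.example.com
hbase.port=2181
hbase.znode_parent=/hbase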

    MySQL & PostgreSQL 后端配置项

config option | default value | description
backend |  | Must be set to mysql.
serializer |  | Must be set to mysql.
jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class to connect database.
jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format.
jdbc.username | root | The username to login database.
jdbc.password | ****** | The password corresponding to jdbc.username.
jdbc.ssl_mode | false | The SSL mode of connections with database.
jdbc.reconnect_interval | 3 | The interval(seconds) between reconnections when the database connection fails.
jdbc.reconnect_max_times | 3 | The reconnect times when the database connection fails.
jdbc.storage_engine | InnoDB | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
jdbc.postgresql.connect_database | template1 | The database used to connect when init store, drop store or check store exist.

PostgreSQL Backend Configuration Options

config option | default value | description
backend |  | Must be set to postgresql.
serializer |  | Must be set to postgresql.

The other options are the same as for the MySQL backend.

For the PostgreSQL backend, the driver and url should be set to:

    • jdbc.driver=org.postgresql.Driver
    • jdbc.url=jdbc:postgresql://localhost:5432/
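
Putting the pieces together, a hedged example of the PostgreSQL-related entries in the graph's properties file (host, port and credentials are placeholders):

backend=postgresql
serializer=postgresql
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/
jdbc.username=postgres
jdbc.password=******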

diff --git a/cn/docs/config/index.html b/cn/docs/config/index.html index 0aa6c25e9..67e1fc44f 100644 --- a/cn/docs/config/index.html +++ b/cn/docs/config/index.html @@ -4,7 +4,7 @@

    Config


diff --git a/cn/docs/download/download/index.html b/cn/docs/download/download/index.html index 2f945200f..b09197224 100644 --- a/cn/docs/download/download/index.html +++ b/cn/docs/download/download/index.html @@ -21,7 +21,7 @@

    Download HugeGraph

    Latest version

    The latest HugeGraph: 0.12.0, released on 2021-12-31.

components | description | download
HugeGraph-Server | The main program of HugeGraph | 0.12.0
HugeGraph-Hubble | Web-based visual graphical interface | 1.6.0
HugeGraph-Loader | Data import tool | 0.12.0
HugeGraph-Tools | Command-line toolset | 1.6.0

    Versions mapping

server | client | loader | hubble | common | tools
0.12.0 | 2.0.1 | 0.12.0 | 1.6.0 | 2.0.1 | 1.6.0
0.11.2 | 1.9.1 | 0.11.1 | 1.5.0 | 1.8.1 | 1.5.0
0.10.4 | 1.8.0 | 0.10.1 | 0.10.0 | 1.6.16 | 1.4.0
0.9.2 | 1.7.0 | 0.9.0 | 0.9.0 | 1.6.0 | 1.3.0
0.8.0 | 1.6.4 | 0.8.0 | 0.8.0 | 1.5.3 | 1.2.0
0.7.4 | 1.5.8 | 0.7.0 | 0.7.0 | 1.4.9 | 1.1.0
0.6.1 | 1.5.6 | 0.6.1 | 0.6.1 | 1.4.3 | 1.0.0
0.5.6 | 1.5.0 | 0.5.6 | 0.5.0 | 1.4.0 |
0.4.5 | 1.4.7 | 0.2.2 | 0.4.1 | 1.3.12 |

Note: the latest graph analysis and visualization platform is hubble, which supports server versions 0.10 and later; studio is the analysis and visualization platform for server 0.10.x and earlier, and it has not been updated since 0.10.

    Release Notes


diff --git a/cn/docs/guides/_print/index.html b/cn/docs/guides/_print/index.html index 6511b9452..8f70db650 100644 --- a/cn/docs/guides/_print/index.html +++ b/cn/docs/guides/_print/index.html @@ -306,7 +306,7 @@
  • Querying vertices or edges of a certain label (query by label) times out

    Since the amount of data belonging to one label may be large, please add a limit.

  • Operating the graph through the RESTful API works, but sending a Gremlin statement fails with: Request Failed(500)

    The GremlinServer configuration may be wrong; check whether the host and port in gremlin-server.yaml match gremlinserver.url in rest-server.properties, change them if they do not match, and then restart the service.

  • A Socket Timeout exception occurs while importing data with Loader, which then interrupts Loader

    Continuously importing data puts the Server under heavy pressure, which causes some requests to time out. The pressure on the Server can be eased by adjusting Loader's parameters (e.g. retry count, retry interval, error tolerance), reducing how often this problem occurs.

  • How to delete all vertices and edges; there is no such interface in the RESTful API, and calling g.V().drop() via Gremlin reports the error Vertices in transaction have reached capacity xxx

    There is currently no good way to delete all the data. Users who deployed the Server and backend themselves can simply clear the database and restart the Server. Alternatively, use the paging API or scan API to fetch all the data first and then delete it record by record.

  • The database was cleared and init-store was executed, but adding a schema reports "xxx has existed"

    HugeGraphServer keeps caches, so the Server must be restarted when the database is cleared, otherwise the leftover caches will cause inconsistency.

  • Inserting vertices or edges reports the error: Id max length is 128, but got xxx {yyy} or Big id max length is 32768, but got xxx

    To guarantee query performance, the current backend storage limits the length of id columns: a vertex id cannot exceed 128 bytes, an edge id cannot exceed 32768 bytes, and an index id cannot exceed 128 bytes.

  • Are nested properties supported, and if not, is there an alternative

    Nested properties are not supported yet. Alternative: extract the nested properties as separate vertices and connect them with edges.

  • Can one EdgeLabel connect multiple pairs of VertexLabels, e.g. an "invest" relationship where a "person" invests in a "company" or a "company" invests in a "company"

    An EdgeLabel cannot connect multiple pairs of VertexLabels; users need to split the EdgeLabel more finely, e.g. "person-invest" and "company-invest".

  • Sending a request via the REST API returns HTTP 415 Unsupported Media Type

    The request header must specify Content-Type: application/json (see the request sketch after this list).

  • Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues
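
    For the Content-Type item above, a hedged request sketch (assuming the default REST port 8080 and the Gremlin endpoint /gremlin; adjust host, port and path to your deployment):

    POST http://127.0.0.1:8080/gremlin
    Content-Type: application/json

    {"gremlin": "g.V().limit(10)"}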


diff --git a/cn/docs/guides/architectural/index.html b/cn/docs/guides/architectural/index.html index 44dcd146d..f56e77f4b 100644 --- a/cn/docs/guides/architectural/index.html +++ b/cn/docs/guides/architectural/index.html @@ -11,7 +11,7 @@

    HugeGraph Architecture Overview

1 Overview

As a general-purpose graph database product, HugeGraph needs to provide the basic capabilities for graph data, as shown in the figure below. HugeGraph covers three layers of functionality: the storage layer, the computing layer and the user interface layer. HugeGraph supports both OLTP and OLAP graph computation; OLTP implements the Apache TinkerPop3 framework and supports the Gremlin query language, while OLAP computation is implemented on top of SparkGraphX.

image

2 Components

The main functionality of HugeGraph is composed of components such as HugeCore, ApiServer, HugeGraph-Client, HugeGraph-Loader and HugeGraph-Studio; the communication between the components is shown in the figure below.

    image

    diff --git a/cn/docs/guides/backup-restore/index.html b/cn/docs/guides/backup-restore/index.html index caf43a52a..9b0ed122b 100644 --- a/cn/docs/guides/backup-restore/index.html +++ b/cn/docs/guides/backup-restore/index.html @@ -49,7 +49,7 @@
    Response Body
    {
         "mode": "RESTORING"
     }

diff --git a/cn/docs/guides/custom-plugin/index.html b/cn/docs/guides/custom-plugin/index.html index 67e056f95..7e9fafe8e 100644 --- a/cn/docs/guides/custom-plugin/index.html +++ b/cn/docs/guides/custom-plugin/index.html @@ -213,7 +213,7 @@

4. Configure the SPI entry

1. Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
2. Create a text file named com.baidu.hugegraph.plugin.HugeGraphPlugin in the services directory
3. The content of the file is: com.baidu.hugegraph.plugin.DemoPlugin

5. Build the Jar package

Package with maven: run mvn package in the project directory, and the Jar file will be generated under the target directory. To use it, copy the Jar into the plugins directory and restart the service for it to take effect.


diff --git a/cn/docs/guides/desgin-concept/index.html b/cn/docs/guides/desgin-concept/index.html index 3dfad7f6d..a22f4c9f9 100644 --- a/cn/docs/guides/desgin-concept/index.html +++ b/cn/docs/guides/desgin-concept/index.html @@ -115,7 +115,7 @@
How transactions are implemented
Note

The RESTful API does not expose a transaction interface yet

The TinkerPop API allows a transaction to be opened; it is closed automatically when the request completes (Gremlin Server forces it to close)
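
As a minimal, hedged Gremlin/Groovy-style sketch of the transaction behaviour described above (it assumes an already opened HugeGraph instance named graph and a schema that defines a 'person' vertex label with a 'name' property):

// open a transaction explicitly, mutate the graph, then commit
graph.tx().open()
try {
    graph.addVertex(T.label, 'person', 'name', 'marko')
    graph.tx().commit()
} catch (Exception e) {
    // roll back on any failure so no partial writes remain
    graph.tx().rollback()
}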


diff --git a/cn/docs/guides/faq/index.html b/cn/docs/guides/faq/index.html index 483501998..9772a2f7e 100644 --- a/cn/docs/guides/faq/index.html +++ b/cn/docs/guides/faq/index.html @@ -81,7 +81,7 @@
diff --git a/cn/docs/guides/index.html b/cn/docs/guides/index.html index 014abd971..cea9a144f 100644 --- a/cn/docs/guides/index.html +++ b/cn/docs/guides/index.html @@ -4,7 +4,7 @@

    GUIDES


diff --git a/cn/docs/index.html b/cn/docs/index.html index 894be47df..955210800 100644 --- a/cn/docs/index.html +++ b/cn/docs/index.html @@ -5,7 +5,7 @@

    Documentation

Welcome to the HugeGraph documentation


diff --git a/cn/docs/introduction/readme/index.html b/cn/docs/introduction/readme/index.html index f3d6d70dd..fbc901bf7 100644 --- a/cn/docs/introduction/readme/index.html +++ b/cn/docs/introduction/readme/index.html @@ -30,7 +30,7 @@
It comes with a complete toolchain ecosystem that helps users easily build applications and products on top of the graph database. HugeGraph supports fast import of more than ten billion vertices and edges, provides millisecond-level relationship query capability (OLTP), and supports large-scale distributed graph analytics (OLAP).

Typical HugeGraph application scenarios include deep relationship exploration, association analysis, path search, feature extraction, data clustering, community detection, knowledge graphs and so on; applicable business domains include network security, telecom fraud detection, financial risk control, advertising recommendation, social networks and intelligent robots.

The system's primary application scenario was to meet the graph data storage, modeling and analysis needs of businesses such as anti-fraud, threat intelligence and fighting organized cybercrime; on that basis it has gradually been extended to support more general graph applications.

Features

HugeGraph supports graph operations in both online and offline environments, batch data import, efficient analysis of complex relationships, and seamless integration with big data platforms. HugeGraph supports concurrent operations by multiple users; users can submit Gremlin query statements and get graph query results promptly, and can also call the HugeGraph API from their own programs for graph analysis or queries.

The system has the following characteristics:

The system's functions include but are not limited to:

    Modules

    Contact Us


diff --git a/cn/docs/language/_print/index.html b/cn/docs/language/_print/index.html index 8cd33b922..f28991fed 100644 --- a/cn/docs/language/_print/index.html +++ b/cn/docs/language/_print/index.html @@ -69,7 +69,7 @@
// what is the name of the brother and the name of the place?
g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name')

It is recommended to use HugeGraph-Studio to run the code above visually. The code can also be executed in several other ways, such as HugeGraph-Client, HugeApi, GremlinConsole and GremlinDriver.

3.2 Summary

HugeGraph currently supports the Gremlin syntax; users can fulfill all kinds of query needs through Gremlin / REST-API.


diff --git a/cn/docs/language/hugegraph-example/index.html b/cn/docs/language/hugegraph-example/index.html index 96c8d44e2..085fe8953 100644 --- a/cn/docs/language/hugegraph-example/index.html +++ b/cn/docs/language/hugegraph-example/index.html @@ -97,7 +97,7 @@

diff --git a/cn/docs/language/hugegraph-gremlin/index.html b/cn/docs/language/hugegraph-gremlin/index.html index 1facda0c4..3aa5de95a 100644 --- a/cn/docs/language/hugegraph-gremlin/index.html +++ b/cn/docs/language/hugegraph-gremlin/index.html @@ -18,7 +18,7 @@

    HugeGraph Gremlin

Overview

HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. While SQL is the query language of relational databases, Gremlin is a general-purpose graph database query language: it can be used to create graph entities (Vertex and Edge), modify the properties inside entities, delete entities, and also perform graph queries.

Gremlin can be used to create graph entities (Vertex and Edge), modify their internal properties and delete them, and more importantly it can be used to perform graph query and analysis operations.

TinkerPop Features

HugeGraph implements the TinkerPop framework, but does not implement every TinkerPop feature.

The following tables list HugeGraph's support for the various TinkerPop features:

    Graph Features

    NameDescriptionSupport
    ComputerDetermines if the {@code Graph} implementation supports {@link GraphComputer} based processingfalse
    TransactionsDetermines if the {@code Graph} implementations supports transactions.true
    PersistenceDetermines if the {@code Graph} implementation supports persisting it’s contents natively to disk.This feature does not refer to every graph’s ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph.true
    ThreadedTransactionsDetermines if the {@code Graph} implementation supports threaded transactions which allow a transaction be executed across multiple threads via {@link Transaction#createThreadedTx()}.false
    ConcurrentAccessDetermines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database.false

    Vertex Features

    NameDescriptionSupport
    UserSuppliedIdsDetermines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept.false
    NumericIdsDetermines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    StringIdsDetermines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    UuidIdsDetermines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    CustomIdsDetermines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    AnyIdsDetermines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.false
    AddPropertyDetermines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}.true
    RemovePropertyDetermines if an {@link Element} allows properties to be removed.true
    AddVerticesDetermines if a {@link Vertex} can be added to the {@code Graph}.true
    MultiPropertiesDetermines if a {@link Vertex} can support multiple properties with the same key.false
    DuplicateMultiPropertiesDetermines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns.false
    MetaPropertiesDetermines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.false
    RemoveVerticesDetermines if a {@link Vertex} can be removed from the {@code Graph}.true

    Edge Features

    NameDescriptionSupport
    UserSuppliedIdsDetermines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept.false
    NumericIdsDetermines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    StringIdsDetermines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    UuidIdsDetermines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    CustomIdsDetermines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    AnyIdsDetermines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.false
    AddPropertyDetermines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}.true
    RemovePropertyDetermines if an {@link Element} allows properties to be removed.true
    AddEdgesDetermines if an {@link Edge} can be added to a {@code Vertex}.true
    RemoveEdgesDetermines if an {@link Edge} can be removed from a {@code Vertex}.true

    Data Type Features

Name | Description | Support
BooleanValues |  | true
ByteValues |  | true
DoubleValues |  | true
FloatValues |  | true
IntegerValues |  | true
LongValues |  | true
MapValues | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself | false
MixedListValues | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "mixed" it does not need to contain objects of the same type. | false
BooleanArrayValues |  | false
ByteArrayValues |  | true
DoubleArrayValues |  | false
FloatArrayValues |  | false
IntegerArrayValues |  | false
LongArrayValues |  | false
SerializableValues |  | false
StringArrayValues |  | false
StringValues |  | true
UniformListValues | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "uniform" it must contain objects of the same type. | false

Gremlin steps

HugeGraph supports all Gremlin steps. For complete Gremlin reference information, please refer to the official Gremlin website.

Step | Description | Documentation
addE | Adds an edge between two vertices | addE step
addV | Adds a vertex to the graph | addV step
and | Ensures that all traversals return a value | and step
as | A step modulator used to assign a variable to the output of a step | as step
by | A step modulator used together with group and order | by step
coalesce | Returns the first traversal that yields a result | coalesce step
constant | Returns a constant value. Used together with coalesce | constant step
count | Returns a count from the traversal | count step
dedup | Returns values with duplicates removed | dedup step
drop | Drops values (vertices/edges) | drop step
fold | Acts as a barrier that computes the aggregated value of the results | fold step
group | Groups values according to the specified label | group step
has | Used to filter properties, vertices and edges. Supports the hasLabel, hasId, hasNot and has variants | has step
inject | Injects values into the stream | inject step
is | Used to apply a filter via a boolean expression | is step
limit | Used to limit the number of items in the traversal | limit step
local | Wraps a part of the traversal locally, similar to a subquery | local step
not | Used to produce the negation of a filter | not step
optional | Returns the result of the specified traversal if it yields one, otherwise returns the calling element | optional step
or | Ensures that at least one traversal returns a value | or step
order | Returns results in the specified sort order | order step
path | Returns the full path of the traversal | path step
project | Projects properties as a map | project step
properties | Returns the properties of the specified labels | properties step
range | Filters based on the specified range of values | range step
repeat | Repeats a step a specified number of times. Used for looping | repeat step
sample | Used to sample the results returned by the traversal | sample step
select | Used to project the results returned by the traversal | select step
store | Used for non-blocking aggregation of the results returned by the traversal | store step
tree | Aggregates the paths from vertices into a tree | tree step
unfold | Unfolds an iterator as a step | unfold step
union | Merges the results returned by multiple traversals | union step
V | Includes the steps needed for traversing between vertices and edges: V, E, out, in, both, outE, inE, bothE, outV, inV, bothV, otherV | order step
where | Used to filter the results returned by the traversal. Supports the eq, neq, lt, lte, gt, gte and between operators | where step
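
As an illustrative combination of several of the steps above (a sketch only, assuming a graph with a 'person' vertex label that carries 'name' and 'age' properties, queried through the traversal source g):

// filter by label and property, order the results, limit them, then project a property
g.V().hasLabel('person').has('age', P.gt(29)).order().by('age').limit(10).values('name')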

diff --git a/cn/docs/language/index.html b/cn/docs/language/index.html index fbadab298..7453da35e 100644 --- a/cn/docs/language/index.html +++ b/cn/docs/language/index.html @@ -4,7 +4,7 @@

    QUERY LANGUAGE


    diff --git a/cn/docs/performance/_print/index.html b/cn/docs/performance/_print/index.html index 35fc149e2..16a5522af 100644 --- a/cn/docs/performance/_print/index.html +++ b/cn/docs/performance/_print/index.html @@ -2,7 +2,7 @@

    1 - HugeGraph BenchMark Performance

1 Test environment

1.1 Hardware

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD

1.2 Software

1.2.1 Test cases

The tests use graphdb-benchmark, a benchmark suite for graph databases. It mainly contains 4 types of tests:

• Massive Insertion: batch insertion of vertices and edges, where a certain number of vertices or edges are committed at once

• Single Insertion: each vertex or edge is inserted and committed immediately

• Query: mainly the basic query operations of a graph database:

  • Find Neighbors: query the neighbors of all vertices
  • Find Adjacent Nodes: query the adjacent vertices of all edges
  • Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
• Clustering: a community detection algorithm based on the Louvain Method

1.2.2 Test datasets

The tests use both synthetic and real data

Dataset sizes used in this test
Name | Number of vertices | Number of edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB
com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB

1.3 Service configuration

• HugeGraph version: 0.5.6, with RestServer, Gremlin Server and the backends all on the same server

  • RocksDB version: rocksdbjni-5.8.6
• Titan version: 0.5.4, using the thrift+Cassandra mode

  • Cassandra version: cassandra-3.10, commit-log and data share the SSD
• Neo4j version: 2.0.1

The Titan version adapted by graphdb-benchmark is 0.5.4

2 Test results

2.1 Batch insertion performance

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033
Titan | 10.15 | 108.569 | 150.266 | 1217.944
Neo4j | 3.884 | 18.938 | 24.890 | 281.537

Notes

• The numbers in "()" in the header are the data scale, measured in edges
• The values in the table are batch insertion times, in seconds
• For example, HugeGraph (RocksDB) inserts the 3 million edges of the amazon0601 dataset in 5.711s
Conclusion
• Batch insertion performance: HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)

2.2 Traversal performance

2.2.1 Terminology
• FN (Find Neighbor): traverse all vertices, look up the adjacent edges of each vertex, and find the other vertex through the edge and vertex
• FA (Find Adjacent): traverse all edges, and obtain the source vertex and target vertex of each edge
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083
Titan | 8.084 | 92.507 | 184.543 | 1099.371
Neo4j | 2.424 | 10.537 | 11.609 | 106.919

Notes

• The numbers in "()" in the header are the data scale, measured in vertices
• The values in the table are the times spent traversing the vertices, in seconds
• For example, HugeGraph with the RocksDB backend traverses all vertices of amazon0601 and looks up the adjacent edges and the other vertex in 45.118s in total
2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271
Titan | 7.361 | 93.344 | 169.218 | 1085.235
Neo4j | 1.673 | 4.775 | 4.284 | 40.507

Notes

• The numbers in "()" in the header are the data scale, measured in edges
• The values in the table are the times spent traversing the edges, in seconds
• For example, HugeGraph with the RocksDB backend traverses all edges of amazon0601 and queries both vertices of each edge in 10.764s in total
Conclusion
• Traversal performance: Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)

2.3 Performance of HugeGraph's common graph analysis methods

Terminology
• FS (Find Shortest Path): find the shortest path
• K-neighbor: all vertices reachable from the starting vertex within K hops, i.e. vertices reachable in 1, 2, 3…(K-1), K hops
• K-out: vertices reachable from the starting vertex in exactly K out-edge hops
FS performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155
Titan | 11.818 | 0.239 | 377.709 | 575.678
Neo4j | 1.719 | 1.800 | 1.956 | 8.530

Notes

• The numbers in "()" in the header are the data scale, measured in edges
• The values in the table are the times to find the shortest paths from the first vertex to 100 randomly selected vertices, in seconds
• For example, HugeGraph with the RocksDB backend finds the shortest paths from the first vertex to 100 random vertices in the amazon0601 graph in 0.103s in total
Conclusion
• When the data scale is small or vertices have few associations, HugeGraph outperforms Neo4j and Titan
• As the data scale grows and vertex connectivity increases, HugeGraph and Neo4j converge in performance, and both are far better than Titan
K-neighbor performance
Vertex | Depth | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 | time | 0.027s | 0.034s | 0.115 | 1.36s | OOM |
v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM

Notes

• HugeGraph-Server's JVM memory is set to 32GB; OOM occurs when the data volume is too large
K-out performance
Vertex | Depth | 1-hop | 2-hop | 3-hop | 4-hop | 5-hop | 6-hop
v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
 | count | 10 | 133 | 2453 | 50,830 | 1,128,688 |
v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
 | count | 10 | 211 | 4944 | 113150 | 2,629,970 |
v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
 | count | 10 | 140 | 2555 | 50825 | 1,070,230 |

Notes

• HugeGraph-Server's JVM memory is set to 32GB; OOM occurs when the data volume is too large
Conclusion
• In the FS scenario, HugeGraph outperforms Neo4j and Titan
• In the K-neighbor and K-out scenarios, HugeGraph can return results within seconds up to a depth of 5

2.4 Graph comprehensive performance test - CW

Database | size 1000 | size 5000 | size 10000 | size 20000
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547
Titan | 45.790 | 820.633 | 2652.235 | 9568.623
Neo4j | 5.913 | 50.267 | 142.354 | 460.880

Notes

• "size" is measured in vertices
• The values in the table are the times, in seconds, needed for the community detection to finish; for example, with the RocksDB backend on a dataset of size 10000, HugeGraph needs 744.780s for the community aggregation to stop changing
• The CW test is a comprehensive evaluation of CRUD
• In this test HugeGraph, like Titan, operates directly on the core without going through the client
Conclusion
• Community clustering algorithm performance: Neo4j > HugeGraph > Titan

    2 - HugeGraph-API Performance

The HugeGraph API performance tests mainly measure HugeGraph-Server's ability to handle concurrent RESTful API requests, including:

• Single insertion of vertices/edges
• Batch insertion of vertices/edges
• Queries of vertices/edges

The RESTful API performance results for each HugeGraph release can be found in:

Earlier versions only provided API performance tests for the best-performing backend among those supported by HugeGraph; starting from version 0.5.6, both stand-alone and cluster results are provided

    2.1 - v0.5.6 Stand-alone(RocksDB)

1 Test environment

Machine under test

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
• Load-generating machine: same configuration as the machine under test
• Test tool: apache-Jmeter-2.5.1

Note: the load-generating machine and the machine under test are in the same data center

2 Test description

2.1 Definitions (all times are in ms)

• Samples – the total number of threads completed in this scenario
• Average – average response time
• Median – the statistical median of the response time
• 90% Line – the response time that 90% of all threads stay below
• Min – minimum response time
• Max – maximum response time
• Error – error rate
• Throughput – throughput
• KB/sec – throughput measured by traffic

2.2 Underlying storage

RocksDB is used as the backend storage; HugeGraph and RocksDB run on the same machine, and the server-related configuration files keep their defaults except for the host and port.

3 Summary of performance results

1. HugeGraph's single insertion speed for vertices and edges is around 10,000 per second
2. The batch insertion speed of vertices and edges is far higher than the single insertion speed
3. The concurrency of querying vertices and edges by id can reach 13,000 or more, with an average request latency below 50ms

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters

Duration: 5min

Maximum vertex insertion speed:
image

Conclusion:

• At a concurrency of 2200, the vertex throughput is 2026.8; data handled per second: 2026.8*200=405360/s
Maximum edge insertion speed
image

Conclusion:

• At a concurrency of 900, the edge throughput is 776.9; data handled per second: 776.9*500=388450/s

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
Single insertion of vertices
image

Conclusion:

• At a concurrency of 11500 the throughput is 10730, so the single-insertion concurrency capability for vertices is 11500
Single insertion of edges
image

Conclusion:

• At a concurrency of 9000 the throughput is 8418, so the single-insertion concurrency capability for edges is 9000

4.3 Query by id

4.3.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
Query vertices by id
image

Conclusion:

• At a concurrency of 14000 the throughput is 12663, so the query-by-id concurrency capability for vertices is 14000, with an average latency of 44ms
Query edges by id
image

Conclusion:

• At a concurrency of 13000 the throughput is 12225, so the query-by-id concurrency capability for edges is 13000, with an average latency of 12ms

    2.2 - v0.5.6 Cluster(Cassandra)

1 Test environment

Machine under test

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
• Load-generating machine: same configuration as the machine under test
• Test tool: apache-Jmeter-2.5.1

Note: the load-generating machine and the machine under test are in the same data center

2 Test description

2.1 Definitions (all times are in ms)

• Samples – the total number of threads completed in this scenario
• Average – average response time
• Median – the statistical median of the response time
• 90% Line – the response time that 90% of all threads stay below
• Min – minimum response time
• Max – maximum response time
• Error – error rate
• Throughput – throughput
• KB/sec – throughput measured by traffic

2.2 Underlying storage

A 15-node Cassandra cluster is used as the backend storage; HugeGraph and the Cassandra cluster are on different servers, and the server-related configuration files keep their defaults except for the host and port.

3 Summary of performance results

1. HugeGraph's single insertion speeds for vertices and edges are about 9000/s and 4500/s respectively
2. The batch insertion speeds of vertices and edges are about 50,000/s and 150,000/s respectively, far higher than the single insertion speed
3. The concurrency of querying vertices and edges by id can reach 12,000 or more, with an average request latency below 70ms

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters

Duration: 5min

Maximum vertex insertion speed:
image

Conclusion:

• At a concurrency of 3500, the vertex throughput is 261; data handled per second: 261*200=52200/s
Maximum edge insertion speed
image

Conclusion:

• At a concurrency of 1000, the edge throughput is 323; data handled per second: 323*500=161500/s

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
Single insertion of vertices
image

Conclusion:

• At a concurrency of 9000 the throughput is 8400, so the single-insertion concurrency capability for vertices is 9000
Single insertion of edges
image

Conclusion:

• At a concurrency of 4500 the throughput is 4160, so the single-insertion concurrency capability for edges is 4500

4.3 Query by id

4.3.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
Query vertices by id
image

Conclusion:

• At a concurrency of 14500 the throughput is 13576, so the query-by-id concurrency capability for vertices is 14500, with an average latency of 11ms
Query edges by id
image

Conclusion:

• At a concurrency of 12000 the throughput is 10688, so the query-by-id concurrency capability for edges is 12000, with an average latency of 63ms

    2.3 - v0.4.4

1 Test environment

Machine under test

Machine | CPU | Memory | NIC | Disk
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD, 2.7T HDD
• Load-generating machine: same configuration as machine 1
• Test tool: apache-Jmeter-2.5.1

Note: the load-generating machine and the machine under test are in the same data center

2 Test description

2.1 Definitions (all times are in ms)

• Samples – the total number of threads completed in this scenario
• Average – average response time
• Median – the statistical median of the response time
• 90% Line – the response time that 90% of all threads stay below
• Min – minimum response time
• Max – maximum response time
• Error – error rate
• Throughput – throughput
• KB/sec – throughput measured by traffic

2.2 Underlying storage

RocksDB is used as the backend storage; HugeGraph and RocksDB run on the same machine, and the server-related configuration files keep their defaults except for the host and port.

3 Summary of performance results

1. The upper limit of requests HugeGraph can handle is about 7000 per second
2. Batch insertion is far faster than single insertion; on the server the test reaches 220,000 edges/s and 370,000 vertices/s
3. With RocksDB as the backend, increasing the number of CPUs and the memory size improves batch insertion performance; doubling CPU and memory improves performance by 45%-60%
4. In the batch insertion scenario, replacing HDD with SSD brings only a small improvement of 3%-5%

4 Test results and analysis

4.1 Batch insertion

4.1.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters

Duration: 5min

Maximum insertion speed for vertices and edges (high-performance server, RocksDB data on SSD):
image
Conclusion:
• At a concurrency of 1000, the edge throughput is 451; data handled per second: 451*500=225500/s
• At a concurrency of 2000, the vertex throughput is 1842.4; data handled per second: 1842.4*200=368480/s

1. Effect of CPU and memory on insertion performance (both servers store RocksDB data on HDD, batch insertion)

image
Conclusion:
• With the same HDD disks, CPU and memory are doubled
• Edges: throughput rises from 268 to 426, a performance improvement of about 60%
• Vertices: throughput rises from 1263.8 to 1842.4, a performance improvement of about 45%

2. Effect of SSD vs HDD on insertion performance (high-performance server, batch insertion)

image
Conclusion:
• Edges: throughput is 451.7 with SSD and 426.6 with HDD, a 5% improvement
• Vertices: throughput is 1842.4 with SSD and 1794 with HDD, an improvement of about 3%

3. Effect of the number of concurrent threads on insertion performance (ordinary server, RocksDB data on HDD)

image
Conclusion:
• Vertices: the gap between the 7ms response time at 1000 concurrency and the 1028ms response time at 1500 concurrency is huge, while throughput stays around 1300, so the inflection point should be around 1300; at 1300 concurrency the response time is already 22ms, which is still controllable. Compared with HugeGraph 0.2 (1000 concurrency: average response time 8959ms), the processing capability is a qualitative leap;
• Edges: from 1000 to 2000 concurrency the processing time becomes too long, exceeding 3s, while throughput fluctuates around 270, so raising the number of concurrent threads further will not increase throughput much; 270 is an inflection point. Compared with HugeGraph 0.2 (1000 concurrency: average response time 31849ms), the improvement in processing capability is very significant;

4.2 Single insertion

4.2.1 Stress limit test
Test method

Keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
image
Conclusion:
• Vertices:
  • 4000 concurrency: normal, no errors, average time below 1ms; 6000 concurrency: no errors, average time 5ms, within an acceptable range;
  • 8000 concurrency: 0.01% errors, the server can no longer cope and connection timeout errors appear; the peak should be around 7000
• Edges:
  • 4000 concurrency: response time 1ms; 6000 concurrency: no anomalies, average response time 8ms (the main differences are in IO network recv/send and CPU);
  • 8000 concurrency: 0.01% error rate, average time 15ms; the inflection point should be around 7000, matching the vertex results;

    2.4 - v0.2

1 Test environment

1.1 Hardware and software

The load-generating machine and the machine under test have the same configuration; the basic parameters are as follows:

CPU | Memory | NIC
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps

Test tool: apache-Jmeter-2.5.1

1.2 Service configuration

• HugeGraph version: 0.2
• Backend storage: the embedded cassandra-3.10, single-node deployment;
• Backend configuration changes: the following two properties in cassandra.yaml were modified, all other options keep their defaults
  batch_size_warn_threshold_in_kb: 1000
   batch_size_fail_threshold_in_kb: 1000
• HugeGraphServer, HugeGremlinServer and cassandra all run on the same machine; the server-related configuration files keep their defaults except for the host and port.

1.3 Definitions

• Samples – the total number of threads completed in this scenario
• Average – average response time
• Median – the statistical median of the response time
• 90% Line – the response time that 90% of all threads stay below
• Min – minimum response time
• Max – maximum response time
• Error – error rate
• Throughput – throughput
• KB/sec – throughput measured by traffic

Note: all times are in ms

2 Test results

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

Conclusion: under a sustained load of 1000 concurrent threads for 5 minutes, the schema interfaces show an average response time of 1-2ms and handle the load without any pressure

2.2 Single insertion

2.2.1 Insertion rate test
Load parameters

Test method: fix the concurrency and measure the processing rate of the server and backend

• Concurrency: 1000
• Duration: 5min
Performance metrics
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
Conclusion
• Vertices: average response time 1ms; each request inserts one record and about 920 requests are processed per second, so the total data processed per second is 1*920, roughly 920 records;
• Edges: average response time 1ms; each request inserts one record and about 920 requests are processed per second, so the total data processed per second is 1*920, roughly 920 records;
2.2.2 Stress limit test

Test method: keep increasing the concurrency to find the maximum load under which the server can still serve requests normally

Load parameters
• Duration: 5min
• Service abnormality indicator: error rate greater than 0.00%
Performance metrics
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
Conclusion
• Vertices:
  • 4000 concurrency: normal, no errors, average time 13ms;
  • 5000 concurrency: handling 5000 insertions per second already produces 0.06% errors, so it can no longer cope; the peak should be around 4000
• Edges:
  • 1000 concurrency: response time 2ms, quite different from that at 2000 concurrency, mainly because IO network recv/send and CPU nearly double;
  • 2000 concurrency: handling 2000 insertions per second, average time 953ms, about 1750 requests processed per second;
  • 3000 concurrency: handling 3000 insertions per second, average time 2258ms, about 1810 requests processed per second;
  • 4000 concurrency: handling 4000 insertions per second, about 1812 requests processed per second;

2.3 Batch insertion

2.3.1 Insertion rate test
Load parameters

Test method: fix the concurrency and measure the processing rate of the server and backend

• Concurrency: 1000
• Duration: 5min
Performance metrics
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
Conclusion
• Vertices: the average response time is 8959ms, which is too long. Each request inserts 199 records and about 103 requests are processed per second, so the total data processed per second is about 199*103, roughly 20,000 records;
• Edges: the average response time is 31849ms, which is too long. Each request inserts 499 records and about 28 requests are processed per second, so the total data processed per second is about 28*499, roughly 13,900 records;

    3 - HugeGraph-Loader Performance

Use cases

When the amount of graph data (vertices and edges) to be batch-inserted is at the billion level or below, or the total data volume is less than a TB, the HugeGraph-Loader tool can be used to import graph data continuously and at high speed

Performance

All tests use the edge data of the website dataset

RocksDB stand-alone performance

• With label index disabled: 228,000 edges/s
• With label index enabled: 153,000 edges/s

Cassandra cluster performance

• With label index enabled by default: 63,000 edges/s

    4 -

1 Test environment

1.1 Hardware

CPU | Memory | NIC | Disk
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD

1.2 Software

1.2.1 Test cases

The tests use graphdb-benchmark, a benchmark suite for graph databases. It mainly contains 4 types of tests:

• Massive Insertion: batch insertion of vertices and edges, where a certain number of vertices or edges are committed at once

• Single Insertion: each vertex or edge is inserted and committed immediately

• Query: mainly the basic query operations of a graph database:

  • Find Neighbors: query the neighbors of all vertices
  • Find Adjacent Nodes: query the adjacent vertices of all edges
  • Find Shortest Path: query the shortest paths from the first vertex to 100 random vertices
• Clustering: a community detection algorithm based on the Louvain Method

1.2.2 Test datasets

The tests use both synthetic and real data

Dataset sizes used in this test
Name | Number of vertices | Number of edges | File size
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB

1.3 Service configuration

• HugeGraph version: 0.4.4, with RestServer, Gremlin Server and the backends all on the same server
• Cassandra version: cassandra-3.10, commit-log and data share the SSD
• RocksDB version: rocksdbjni-5.8.6
• Titan version: 0.5.4, using the thrift+Cassandra mode

The Titan version adapted by graphdb-benchmark is 0.5.4

2 Test results

2.1 Batch insertion performance

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 9.516 | 88.123 | 111.586
RocksDB | 2.345 | 14.076 | 16.636
Cassandra | 11.930 | 108.709 | 101.959
Memory | 3.077 | 15.204 | 13.841

Notes

• The numbers in "()" in the header are the data scale, measured in edges
• The values in the table are batch insertion times, in seconds
• For example, HugeGraph (RocksDB) inserts the 3 million edges of the amazon0601 dataset in 14.076s, a rate of roughly 210,000 edges/s
Conclusion
• The insertion performance of the RocksDB and Memory backends is better than that of Cassandra
• With Cassandra as the backend, HugeGraph and Titan have similar insertion performance

2.2 Traversal performance

2.2.1 Terminology
• FN (Find Neighbor): traverse all vertices, look up the adjacent edges of each vertex, and find the other vertex through the edge and vertex
• FA (Find Adjacent): traverse all edges, and obtain the source vertex and target vertex of each edge
2.2.2 FN performance
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
Titan | 7.724 | 70.935 | 128.884
RocksDB | 8.876 | 65.852 | 63.388
Cassandra | 13.125 | 126.959 | 102.580
Memory | 22.309 | 207.411 | 165.609

Notes

• The numbers in "()" in the header are the data scale, measured in vertices
• The values in the table are the times spent traversing the vertices, in seconds
• For example, HugeGraph with the RocksDB backend traverses all vertices of amazon0601 and looks up the adjacent edges and the other vertex in 65.852s in total
2.2.3 FA performance
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 7.119 | 63.353 | 115.633
RocksDB | 6.032 | 64.526 | 52.721
Cassandra | 9.410 | 102.766 | 94.197
Memory | 12.340 | 195.444 | 140.89

Notes

• The numbers in "()" in the header are the data scale, measured in edges
• The values in the table are the times spent traversing the edges, in seconds
• For example, HugeGraph with the RocksDB backend traverses all edges of amazon0601 and queries both vertices of each edge in 64.526s in total
Conclusion
• HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    • FS(Find Shortest Path), 寻找最短路径
    • K-neighbor,从起始vertex出发,通过K跳边能够到达的所有顶点, 包括1, 2, 3…(K-1), K跳边可达vertex
    • K-out, 从起始vertex出发,恰好经过K跳out边能够到达的顶点
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan11.3330.313376.06
    RocksDB44.3912.221268.792
    Cassandra39.8453.337331.113
    Memory35.6382.059388.987

    说明

    • 表头”()“中数据是数据规模,以边为单位
    • 表中数据是找到从第一个顶点出发到达随机选择的100个顶点的最短路径的时间,单位是s
    • 例如,HugeGraph使用RocksDB查找第一个顶点到100个随机顶点的最短路径,总共耗时2.059s
    结论
    • 在数据规模小或者顶点关联关系少的场景下,Titan最短路径性能优于HugeGraph
    • 随着数据规模增大且顶点的关联度增高,HugeGraph最短路径性能优于Titan
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    K-out性能
| 顶点 | 深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|------|------|------|------|------|------|------|------|
| v1 | 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| | 数量 | 10 | 133 | 2453 | 50,830 | 1,128,688 | |
| v111 | 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| | 数量 | 10 | 211 | 4944 | 113150 | 2,629,970 | |
| v1111 | 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| | 数量 | 10 | 140 | 2555 | 50825 | 1,070,230 | |

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    结论
    • FS场景,HugeGraph性能优于Titan
    • K-neighbor和K-out场景,HugeGraph能够实现在5度范围内秒级返回结果
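上面的 K-neighbor / K-out 查询同样可以通过 traversers 接口发起,下面是一个示意(顶点 id、深度等均为假设值,参数名以对应版本的 RESTful API 文档为准):

```bash
# K-out:从起始顶点出发,恰好经过 max_depth 跳 out 边可达的顶点(示意)
curl -G "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kout" \
     --data-urlencode 'source="1:v1"' \
     --data-urlencode 'max_depth=3' \
     --data-urlencode 'direction=OUT'

# K-neighbor:从起始顶点出发,max_depth 跳以内可达的全部顶点(示意)
curl -G "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kneighbor" \
     --data-urlencode 'source="1:v1"' \
     --data-urlencode 'max_depth=3'
```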

    2.4 图综合性能测试-CW

| 数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000 |
|--------|----------|----------|-----------|-----------|
| Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
| Memory(core) | 41.077 | 1825.905 | * | * |
| Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
| RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813 |

    说明

    • “规模"以顶点为单位
    • 表中数据是社区发现完成需要的时间,单位是s,例如HugeGraph使用RocksDB后端在规模10000的数据集,社区聚合不再变化,需要耗时763.869s
    • “*“表示超过10000s未完成
    • CW测试是CRUD的综合评估
    • 后三者分别是HugeGraph的不同后端,该测试中HugeGraph跟Titan一样,没有通过client,直接对core操作
    结论
    • HugeGraph在使用Cassandra后端时,性能略优于Titan,随着数据规模的增大,优势越来越明显,数据规模20000时,比Titan快30%
    • HugeGraph在使用RocksDB后端时,性能远高于Titan和HugeGraph的Cassandra后端,分别比两者快了6倍和4倍
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    3 - HugeGraph-Loader Performance

    使用场景

    当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据
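下面给出一个调用 HugeGraph-Loader 的命令行示意(脚本名与参数名以所用 Loader 版本的文档为准;示例中的主机、端口、图名以及映射文件、schema 文件路径均为假设值):

```bash
# 按照 struct.json 中的映射规则,将数据持续导入 127.0.0.1:8080 上名为 hugegraph 的图(示意)
bin/hugegraph-loader.sh -g hugegraph \
    -f ./example/struct.json \
    -s ./example/schema.groovy \
    -h 127.0.0.1 -p 8080
```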

    性能

    测试均采用网址数据的边数据

    RocksDB单机性能

    Cassandra集群性能

    4 -

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan9.51688.123111.586
    RocksDB2.34514.07616.636
    Cassandra11.930108.709101.959
    Memory3.07715.20413.841

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)
    Titan7.72470.935128.884
    RocksDB8.87665.85263.388
    Cassandra13.125126.959102.580
    Memory22.309207.411165.609

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan7.11963.353115.633
    RocksDB6.03264.52652.721
    Cassandra9.410102.76694.197
    Memory12.340195.444140.89

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan11.3330.313376.06
    RocksDB44.3912.221268.792
    Cassandra39.8453.337331.113
    Memory35.6382.059388.987

    说明

    结论
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    结论

    2.4 图综合性能测试-CW

    数据库规模1000规模5000规模10000规模20000
    Titan45.943849.1682737.1179791.46
    Memory(core)41.0771825.905**
    Cassandra(core)39.783862.7442423.1366564.191
    RocksDB(core)33.383199.894763.8691677.813

    说明

    结论
    diff --git a/cn/docs/performance/api-preformance/_print/index.html b/cn/docs/performance/api-preformance/_print/index.html index 47453135b..9b8a394a2 100644 --- a/cn/docs/performance/api-preformance/_print/index.html +++ b/cn/docs/performance/api-preformance/_print/index.html @@ -10,7 +10,7 @@

    This is the multi-page printable view of this section. Click here to print.

    Return to the regular view of this page.

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    • 顶点/边的单条插入
    • 顶点/边的批量插入
    • 顶点/边的查询

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况

    1 - v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

| CPU | Memory | 网卡 | 磁盘 |
|-----|--------|------|------|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。
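作为参考,下面用一段 shell 展示"除主机和端口外均保持默认"时通常涉及的配置位置(文件名与配置项为常见默认值,具体以实际版本的配置文件为准,路径均为假设):

```bash
# rest-server.properties:服务监听地址(示意)
grep "restserver.url" conf/rest-server.properties
# 预期输出类似:restserver.url=http://0.0.0.0:8080

# hugegraph.properties:RocksDB 后端相关配置保持默认(示意)
grep -E "^backend|^serializer|rocksdb.data_path" conf/hugegraph.properties
# 预期输出类似:backend=rocksdb、serializer=binary、rocksdb.data_path=./rocksdb-data
```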

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

结论:

    • 并发2200,顶点的吞吐量是2026.8,每秒可处理的数据:2026.8*200=405360/s
    边的最大插入速度
    image

结论:

    • 并发900,边的吞吐量是776.9,每秒可处理的数据:776.9*500=388450/s
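压测中的批量插入走的是顶点/边的 batch 接口,每个请求携带约 200 个顶点(与上文"吞吐量×200"的换算一致)。下面是一个最小化的调用示意(接口路径与字段名以对应版本的 RESTful API 文档为准;图名、label、属性均为假设,示例里只放了 2 个顶点):

```bash
curl -X POST "http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices/batch" \
     -H "Content-Type: application/json" \
     -d '[
           {"label": "person", "properties": {"name": "marko", "age": 29}},
           {"label": "person", "properties": {"name": "vadas", "age": 27}}
         ]'
```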

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

结论:

    • 并发11500,吞吐量为10730,顶点的单条插入并发能力为11500
    边的单条插入
    image

结论:

    • 并发9000,吞吐量是8418,边的单条插入并发能力为9000
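单条插入对应的是顶点/边的非 batch 接口,每个请求只提交一条数据。下面以插入一条边为例给出示意(字段名与 id 形式以对应版本的 RESTful API 文档为准,示例中的顶点 id、label、属性均为假设):

```bash
curl -X POST "http://127.0.0.1:8080/apis/graphs/hugegraph/graph/edges" \
     -H "Content-Type: application/json" \
     -d '{
           "label": "knows",
           "outV": "1:marko", "outVLabel": "person",
           "inV":  "1:vadas", "inVLabel":  "person",
           "properties": {"date": "2017-5-18"}
         }'
```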

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

结论:

    • 并发14000,吞吐量是12663,顶点的按id查询的并发能力为14000,平均延时为44ms
    边的按id查询
    image

结论:

    • 并发13000,吞吐量是12225,边的按id查询的并发能力为13000,平均延时为12ms
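按 id 查询对应的是 GET 单个顶点/边的接口,下面是一个按 id 查询顶点的示意(字符串 id 需要加引号并做 URL 编码,示例 id 为假设值;边的按 id 查询与此类似):

```bash
curl -s "http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices/%221:marko%22"
```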

    2 - v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

| CPU | Memory | 网卡 | 磁盘 |
|-----|--------|------|------|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

结论:

    • 并发3500,顶点的吞吐量是261,每秒可处理的数据:261*200=52200/s
    边的最大插入速度
    image

结论:

    • 并发1000,边的吞吐量是323,每秒可处理的数据:323*500=161500/s

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

结论:

    • 并发9000,吞吐量为8400,顶点的单条插入并发能力为9000
    边的单条插入
    image

结论:

    • 并发4500,吞吐量是4160,边的单条插入并发能力为4500

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

结论:

    • 并发14500,吞吐量是13576,顶点的按id查询的并发能力为14500,平均延时为11ms
    边的按id查询
    image

结论:

    • 并发12000,吞吐量是10688,边的按id查询的并发能力为12000,平均延时为63ms

    3 - v0.4.4

    1 测试环境

    被压机器信息

| 机器编号 | CPU | Memory | 网卡 | 磁盘 |
|----------|-----|--------|------|------|
| 1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD |
| 2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD |
    • 起压力机器信息:与编号 1 机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:
• 并发1000,边的吞吐量是451,每秒可处理的数据:451*500条=225500/s
    • 并发2000,顶点的吞吐量是1842.4,每秒可处理的数据:1842.4*200=368480/s

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:
    • 同样使用HDD硬盘,CPU和内存增加了1倍
    • 边:吞吐量从268提升至426,性能提升了约60%
    • 顶点:吞吐量从1263.8提升至1842.4,性能提升了约45%

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:
    • 边:使用SSD吞吐量451.7,使用HDD吞吐量426.6,性能提升5%
    • 顶点:使用SSD吞吐量1842.4,使用HDD吞吐量1794,性能提升约3%

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:
• 顶点:1000并发时响应时间7ms,1500并发时响应时间1028ms,差距悬殊,且吞吐量一直保持在1300左右,因此拐点应该在1300并发附近;并发1300时响应时间已达22ms,仍在可控范围内。相比HugeGraph 0.2(1000并发:平均响应时间8959ms),处理能力出现质的飞跃;
• 边:从1000并发到2000并发,处理时间过长(超过3s),且吞吐量始终在270左右浮动,继续增大并发线程数吞吐量也不会再大幅增长,因此270是一个拐点;跟HugeGraph 0.2版本(1000并发:平均响应时间31849ms)相比,处理能力提升非常明显;

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    image
    结论:
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时小于1ms, 6000并发无错误,平均耗时5ms,在可接受范围内;
      • 8000并发:存在0.01%的错误,已经无法处理,出现connection timeout错误,顶峰应该在7000左右
    • 边:
• 4000并发:响应时间1ms;6000并发无任何异常,平均响应时间8ms(主要差异在于 IO network recv和send以及CPU);
      • 8000并发:存在0.01%的错误率,平均耗15ms,拐点应该在7000左右,跟顶点结果匹配;

    4 - v0.2

    1 测试环境

    1.1 软硬件信息

    起压和被压机器配置相同,基本参数如下:

| CPU | Memory | 网卡 |
|-----|--------|------|
| 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps |

    测试工具:apache-Jmeter-2.5.1

    1.2 服务配置

    • HugeGraph版本:0.2
    • 后端存储:使用服务内嵌的cassandra-3.10,单点部署;
    • 后端配置修改:修改了cassandra.yaml文件中的以下两个属性,其余选项均保持默认
      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -
    • HugeGraphServer 与 HugeGremlinServer 与cassandra都在同一机器上启动,server 相关的配置文件除主机和端口有修改外,其余均保持默认。

    1.3 名词解释

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
• Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1 |
| vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4 |
| edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8 |

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4 |
| single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1 |
    结论
    • 顶点:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    • 边:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    性能指标
| Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| 2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1 |
| 4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0 |
| 5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0 |
| 7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1 |
| 2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6 |
| 3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9 |
| 4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5 |
    结论
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时13ms;
      • 5000并发:每秒处理5000个数据的插入,就会存在0.06%的错误,应该已经处理不了了,顶峰应该在4000
    • 边:
• 1000并发:响应时间2ms,跟2000并发的响应时间相差较多(主要是 IO network recv和send以及CPU几乎增加了一倍);
      • 2000并发:每秒处理2000个数据的插入,平均耗时953ms,平均每秒处理1750个请求;
      • 3000并发:每秒处理3000个数据的插入,平均耗时2258ms,平均每秒处理1810个请求;
      • 4000并发:每秒处理4000个数据的插入,平均每秒处理1812个请求;

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
| Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec |
|-------|---------|---------|--------|---------|-----|-----|--------|------------|--------|
| batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3 |
| batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9 |
    结论
• 顶点:平均响应时间为8959ms,处理时间过长。每个请求插入199条数据,平均每秒处理103个请求,则每秒平均总共处理的数据为199*103约等于2w条数据;
    • 边:平均响应时间31849ms,处理时间过长。每个请求插入499个数据,平均每秒处理28个请求,则每秒平均总共处理的数据为28*499约等于13900条数据;
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论
    diff --git a/cn/docs/performance/api-preformance/hugegraph-api-0.2/index.html b/cn/docs/performance/api-preformance/hugegraph-api-0.2/index.html index d84ee45f8..9f0f36ebf 100644 --- a/cn/docs/performance/api-preformance/hugegraph-api-0.2/index.html +++ b/cn/docs/performance/api-preformance/hugegraph-api-0.2/index.html @@ -35,7 +35,7 @@ Create project issue Print entire section

    v0.2

    1 测试环境

    1.1 软硬件信息

    起压和被压机器配置相同,基本参数如下:

    CPUMemory网卡
    24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps

    测试工具:apache-Jmeter-2.5.1

    1.2 服务配置

      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html b/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html index 1f53f3ad0..65a96296c 100644 --- a/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html +++ b/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html @@ -32,7 +32,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.4.4

    1 测试环境

    被压机器信息

    机器编号CPUMemory网卡磁盘
    124 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps1.4T HDD
    248 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    image
    结论:

    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.4.4

    1 测试环境

    被压机器信息

    机器编号CPUMemory网卡磁盘
    124 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps1.4T HDD
    248 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    image
    结论:

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html b/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html index 76a4fe9fc..08ac110ee 100644 --- a/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html +++ b/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html @@ -35,7 +35,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html b/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html index 5526c960f..3c159083b 100644 --- a/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html +++ b/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html @@ -35,7 +35,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/api-preformance/index.html b/cn/docs/performance/api-preformance/index.html index c69ab36fe..8f0cf7c25 100644 --- a/cn/docs/performance/api-preformance/index.html +++ b/cn/docs/performance/api-preformance/index.html @@ -12,7 +12,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/hugegraph-benchmark-0.4.4/index.html b/cn/docs/performance/hugegraph-benchmark-0.4.4/index.html index 23e7132ee..8bbe7517d 100644 --- a/cn/docs/performance/hugegraph-benchmark-0.4.4/index.html +++ b/cn/docs/performance/hugegraph-benchmark-0.4.4/index.html @@ -61,7 +61,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan9.51688.123111.586
    RocksDB2.34514.07616.636
    Cassandra11.930108.709101.959
    Memory3.07715.20413.841

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)
    Titan7.72470.935128.884
    RocksDB8.87665.85263.388
    Cassandra13.125126.959102.580
    Memory22.309207.411165.609

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan7.11963.353115.633
    RocksDB6.03264.52652.721
    Cassandra9.410102.76694.197
    Memory12.340195.444140.89

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan11.3330.313376.06
    RocksDB44.3912.221268.792
    Cassandra39.8453.337331.113
    Memory35.6382.059388.987

    说明

    结论
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    结论

    2.4 图综合性能测试-CW

    数据库规模1000规模5000规模10000规模20000
    Titan45.943849.1682737.1179791.46
    Memory(core)41.0771825.905**
    Cassandra(core)39.783862.7442423.1366564.191
    RocksDB(core)33.383199.894763.8691677.813

    说明

    结论

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    + Print entire section

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan9.51688.123111.586
    RocksDB2.34514.07616.636
    Cassandra11.930108.709101.959
    Memory3.07715.20413.841

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)
    Titan7.72470.935128.884
    RocksDB8.87665.85263.388
    Cassandra13.125126.959102.580
    Memory22.309207.411165.609

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan7.11963.353115.633
    RocksDB6.03264.52652.721
    Cassandra9.410102.76694.197
    Memory12.340195.444140.89

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan11.3330.313376.06
    RocksDB44.3912.221268.792
    Cassandra39.8453.337331.113
    Memory35.6382.059388.987

    说明

    结论
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    结论

    2.4 图综合性能测试-CW

    数据库规模1000规模5000规模10000规模20000
    Titan45.943849.1682737.1179791.46
    Memory(core)41.0771825.905**
    Cassandra(core)39.783862.7442423.1366564.191
    RocksDB(core)33.383199.894763.8691677.813

    说明

    结论

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/performance/hugegraph-benchmark-0.5.6/index.html b/cn/docs/performance/hugegraph-benchmark-0.5.6/index.html index 7cb1bd854..3818bec0b 100644 --- a/cn/docs/performance/hugegraph-benchmark-0.5.6/index.html +++ b/cn/docs/performance/hugegraph-benchmark-0.5.6/index.html @@ -61,7 +61,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph BenchMark Performance

    1 测试环境

    1.1 硬件信息

| CPU | Memory | 网卡 | 磁盘 |
|-----|--------|------|------|
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
| 名称 | vertex数目 | edge数目 | 文件大小 |
|------|-----------|----------|----------|
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
| com-lj.ungraph.txt | 3997961 | 34681189 | 479MB |

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---------|------------------|------------------|---------------------------|------------------------|
| HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
| Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
| Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
| Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
|---------|-------------------|-----------------|----------------------------|-----------------------|
| HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
| Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
| Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |

    说明

    2.2.3 FA性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---------|------------------|------------------|---------------------------|------------------------|
| HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
| Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
| Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
| Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
|---------|------------------|------------------|---------------------------|------------------------|
| HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
| Titan | 11.818 | 0.239 | 377.709 | 575.678 |
| Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |

    说明

    结论
    K-neighbor性能
| 顶点 | 深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|------|------|------|------|------|------|------|------|
| v1 | 时间 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | 时间 | 0.027s | 0.034s | 0.115 | 1.36s | OOM | |
| v1111 | 时间 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

    说明

    K-out性能
| 顶点 | 深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度 |
|------|------|------|------|------|------|------|------|
| v1 | 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| | 数量 | 10 | 133 | 2453 | 50,830 | 1,128,688 | |
| v111 | 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| | 数量 | 10 | 211 | 4944 | 113150 | 2,629,970 | |
| v1111 | 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| | 数量 | 10 | 140 | 2555 | 50825 | 1,070,230 | |

    说明

    结论

    2.4 图综合性能测试-CW

| 数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000 |
|--------|----------|----------|-----------|-----------|
| HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547 |
| Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
| Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |

    说明

    结论

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    + Print entire section

    HugeGraph BenchMark Performance

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB
    com-lj.ungraph.txt399796134681189479MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)com-lj.ungraph(3000w)
    HugeGraph0.6295.7115.24367.033
    Titan10.15108.569150.2661217.944
    Neo4j3.88418.93824.890281.537

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)com-lj.ungraph(400w)
    HugeGraph4.07245.11866.006609.083
    Titan8.08492.507184.5431099.371
    Neo4j2.42410.53711.609106.919

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)com-lj.ungraph(3000w)
    HugeGraph1.54010.76411.243151.271
    Titan7.36193.344169.2181085.235
    Neo4j1.6734.7754.28440.507

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)com-lj.ungraph(3000w)
    HugeGraph0.4940.1033.3648.155
    Titan11.8180.239377.709575.678
    Neo4j1.7191.8001.9568.530

    说明

    结论
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    结论

    2.4 图综合性能测试-CW

    数据库规模1000规模5000规模10000规模20000
    HugeGraph(core)20.804242.099744.7801700.547
    Titan45.790820.6332652.2359568.623
    Neo4j5.91350.267142.354460.880

    说明

    结论

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/performance/hugegraph-loader-performance/index.html b/cn/docs/performance/hugegraph-loader-performance/index.html index 6d47d3a4d..a06689847 100644 --- a/cn/docs/performance/hugegraph-loader-performance/index.html +++ b/cn/docs/performance/hugegraph-loader-performance/index.html @@ -19,7 +19,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-Loader Performance

    使用场景

    当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据

    性能

    测试均采用网址数据的边数据

    RocksDB单机性能

    Cassandra集群性能


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph-Loader Performance

    使用场景

    当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据

    性能

    测试均采用网址数据的边数据

    RocksDB单机性能

    Cassandra集群性能


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/performance/index.html b/cn/docs/performance/index.html index 50ee03b82..da1b465c7 100644 --- a/cn/docs/performance/index.html +++ b/cn/docs/performance/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    PERFORMANCE


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    PERFORMANCE


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/quickstart/_print/index.html b/cn/docs/quickstart/_print/index.html index 1fe071636..2f89ec5e5 100644 --- a/cn/docs/quickstart/_print/index.html +++ b/cn/docs/quickstart/_print/index.html @@ -1343,7 +1343,7 @@ # 注意: 诊断日志仅在作业失败时存在,并且只会保存一小时。 kubectl get event --field-selector reason=ComputerJobFailed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system

    2.2.8 显示作业的成功事件

    NOTE: it will only be saved for one hour

    kubectl get event --field-selector reason=ComputerJobSucceed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
    -

    2.2.9 查询算法结果

如果输出到 Hugegraph-Server 则与 Locally 模式一致;如果输出到 HDFS,请检查 hugegraph-computer/results/{jobId} 目录下的结果文件。
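如果结果写入了 HDFS,可以用类似下面的命令查看结果文件(结果目录前缀取决于实际的输出配置,此处路径与 jobId 仅为示意):

```bash
JOB_ID=pagerank-sample   # 替换为实际的作业 id(示例值)
hdfs dfs -ls  "hugegraph-computer/results/${JOB_ID}"
hdfs dfs -cat "hugegraph-computer/results/${JOB_ID}/*" | head -n 20
```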

    3 内置算法文档

    3.1 支持的算法列表:

    中心性算法:
    社区算法:
    路径算法:

    更多算法请看: Built-In algorithms

    3.2 算法描述

    TODO

    4 算法开发指南

    TODO

    +

    2.2.9 查询算法结果

如果输出到 Hugegraph-Server 则与 Locally 模式一致;如果输出到 HDFS,请检查 hugegraph-computer/results/{jobId} 目录下的结果文件。

    3 内置算法文档

    3.1 支持的算法列表:

    中心性算法:
    社区算法:
    路径算法:

    更多算法请看: Built-In algorithms

    3.2 算法描述

    TODO

    4 算法开发指南

    TODO

    diff --git a/cn/docs/quickstart/hugegraph-client/index.html b/cn/docs/quickstart/hugegraph-client/index.html index 1f7fcf078..dc922bc83 100644 --- a/cn/docs/quickstart/hugegraph-client/index.html +++ b/cn/docs/quickstart/hugegraph-client/index.html @@ -309,7 +309,7 @@ hugeClient.close(); } } -

    4.4 运行Example

    运行Example之前需要启动Server, 启动过程见HugeGraph-Server Quick Start

    4.5 Example示例说明

    示例说明见HugeGraph-Client基本API介绍


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    4.4 运行Example

    运行Example之前需要启动Server, 启动过程见HugeGraph-Server Quick Start

    4.5 Example示例说明

    示例说明见HugeGraph-Client基本API介绍


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/quickstart/hugegraph-computer/index.html b/cn/docs/quickstart/hugegraph-computer/index.html index 4a5359d89..102bfb4c9 100644 --- a/cn/docs/quickstart/hugegraph-computer/index.html +++ b/cn/docs/quickstart/hugegraph-computer/index.html @@ -86,7 +86,7 @@ # 注意: 诊断日志仅在作业失败时存在,并且只会保存一小时。 kubectl get event --field-selector reason=ComputerJobFailed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system

    2.2.8 显示作业的成功事件

    NOTE: it will only be saved for one hour

    kubectl get event --field-selector reason=ComputerJobSucceed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
    -

    2.2.9 查询算法结果

如果输出到 Hugegraph-Server 则与 Locally 模式一致;如果输出到 HDFS,请检查 hugegraph-computer/results/{jobId} 目录下的结果文件。

    3 内置算法文档

    3.1 支持的算法列表:

    中心性算法:
    社区算法:
    路径算法:

    更多算法请看: Built-In algorithms

    3.2 算法描述

    TODO

    4 算法开发指南

    TODO


    Last modified November 28, 2022: improve computer doc (#157) (862b048)
    +

    2.2.9 查询算法结果

如果输出到 Hugegraph-Server 则与 Locally 模式一致;如果输出到 HDFS,请检查 hugegraph-computer/results/{jobId} 目录下的结果文件。

    3 内置算法文档

    3.1 支持的算法列表:

    中心性算法:
    社区算法:
    路径算法:

    更多算法请看: Built-In algorithms

    3.2 算法描述

    TODO

    4 算法开发指南

    TODO


    Last modified November 28, 2022: improve computer doc (#157) (862b048)
    diff --git a/cn/docs/quickstart/hugegraph-hubble/index.html b/cn/docs/quickstart/hugegraph-hubble/index.html index 996eb7b1f..44b35c80d 100644 --- a/cn/docs/quickstart/hugegraph-hubble/index.html +++ b/cn/docs/quickstart/hugegraph-hubble/index.html @@ -64,7 +64,7 @@ Create project issue Print entire section

    HugeGraph-Hubble Quick Start

    1 HugeGraph-Hubble概述

    HugeGraph是一款面向分析型,支持批量操作的图数据库系统,它由百度安全团队自主研发,全面支持Apache TinkerPop3框架和Gremlin图查询语言,提供导出、备份、恢复等完善的工具链生态,有效解决海量图数据的存储、查询和关联分析需求。HugeGraph广泛应用于银行券商的风控打击、保险理赔、推荐搜索、公安犯罪打击、知识图谱构建、网络安全、IT运维等领域,致力于让更多行业、组织及用户享受到更广泛的数据综合价值。

    HugeGraph-Hubble 是HugeGraph的一站式可视化分析平台,平台涵盖了从数据建模,到数据快速导入,再到数据的在线、离线分析、以及图的统一管理的全过程,实现了图应用的全流程向导式操作,旨在提升用户的使用流畅度,降低用户的使用门槛,提供更为高效易用的使用体验。

    平台主要包括以下模块:

    图管理

    图管理模块通过图的创建,连接平台与图数据,实现多图的统一管理,并实现图的访问、编辑、删除、查询操作。

    元数据建模

    元数据建模模块通过创建属性库,顶点类型,边类型,索引类型,实现图模型的构建与管理,平台提供两种模式,列表模式和图模式,可实时展示元数据模型,更加直观。同时还提供了跨图的元数据复用功能,省去相同元数据繁琐的重复创建过程,极大地提升建模效率,增强易用性。

    数据导入

    数据导入是将用户的业务数据转化为图的顶点和边并插入图数据库中,平台提供了向导式的可视化导入模块,通过创建导入任务,实现导入任务的管理及多个导入任务的并行运行,提高导入效能。进入导入任务后,只需跟随平台步骤提示,按需上传文件,填写内容,就可轻松实现图数据的导入过程,同时支持断点续传,错误重试机制等,降低导入成本,提升效率。

    图分析

    通过输入图遍历语言Gremlin可实现图数据的高性能通用分析,并提供顶点的定制化多维路径查询等功能,提供3种图结果展示方式,包括:图形式、表格形式、Json形式,多维度展示数据形态,满足用户使用的多种场景需求。提供运行记录及常用语句收藏等功能,实现图操作的可追溯,以及查询输入的复用共享,快捷高效。支持图数据的导出,导出格式为Json格式。

    任务管理

    对于需要遍历全图的Gremlin任务,索引的创建与重建等耗时较长的异步任务,平台提供相应的任务管理功能,实现异步任务的统一的管理与结果查看。

    2 平台使用流程

    平台的模块使用流程如下:

    image

    3 平台使用说明

    3.1 图管理

    3.1.1 图创建

    图管理模块下,点击【创建图】,通过填写图ID、图名称、主机名、端口号、用户名、密码的信息,实现多图的连接。

    image

    创建图填写内容如下:

    image
    3.1.2 图访问

    实现图空间的信息访问,进入后,可进行图的多维查询分析、元数据管理、数据导入、算法分析等操作。

    image
    3.1.3 图管理
    1. 用户通过对图的概览、搜索以及单图的信息编辑与删除,实现图的统一管理。
    2. 搜索范围:可对图名称和ID进行搜索。
    image

    3.2 元数据建模(列表+图模式)

    3.2.1 模块入口

    左侧导航处:

    image
    3.2.2 属性类型
    3.2.2.1 创建
    1. 填写或选择属性名称、数据类型、基数,完成属性的创建。
    2. 创建的属性可作为顶点类型和边类型的属性。

    列表模式:

    image

    图模式:

    image
    3.2.2.2 复用
    1. 平台提供【复用】功能,可直接复用其他图的元数据。
    2. 选择需要复用的图ID,继续选择需要复用的属性,之后平台会进行是否冲突的校验,通过后,可实现元数据的复用。

    选择复用项:

    image

    校验复用项:

    image
    3.2.2.3 管理
    1. 在属性列表中可进行单条删除或批量删除操作。
    3.2.3 顶点类型
    3.2.3.1 创建
    1. 填写或选择顶点类型名称、ID策略、关联属性、主键属性,顶点样式、查询结果中顶点下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成顶点类型的创建。

    列表模式:

    image

    图模式:

    image
    3.2.3.2 复用
    1. 顶点类型的复用,会将此类型关联的属性和属性索引一并复用。
    2. 复用功能使用方法类似属性的复用,见3.2.2.2。
    3.2.3.3 管理
    1. 可进行编辑操作,顶点样式、关联类型、顶点展示内容、属性索引可编辑,其余不可编辑。

    2. 可进行单条删除或批量删除操作。

    image
    3.2.4 边类型
    3.2.4.1 创建
    1. 填写或选择边类型名称、起点类型、终点类型、关联属性、是否允许多次连接、边样式、查询结果中边下方展示的内容,以及索引的信息:包括是否创建类型索引,及属性索引的具体内容,完成边类型的创建。

    列表模式:

    image

    图模式:

    image
    3.2.4.2 复用
    1. 边类型的复用,会将此类型的起点类型、终点类型、关联的属性和属性索引一并复用。
    2. 复用功能使用方法类似属性的复用,见3.2.2.2。
    3.2.4.3 管理
    1. 可进行编辑操作,边样式、关联属性、边展示内容、属性索引可编辑,其余不可编辑,同顶点类型。
    2. 可进行单条删除或批量删除操作。
    3.2.5 索引类型

    展示顶点类型和边类型的顶点索引和边索引。

    3.3 数据导入

    数据导入的使用流程如下:

    image
    3.3.1 模块入口

    左侧导航处:

    image
    3.3.2 创建任务
    1. 填写任务名称和备注(非必填),可以创建导入任务。
    2. 可创建多个导入任务,并行导入。
    image
    3.3.3 上传文件
    1. 上传需要构图的文件,目前支持的格式为CSV,后续会不断更新。
    2. 可同时上传多个文件。
    image
    3.3.4 设置数据映射
    1. 对上传的文件分别设置数据映射,包括文件设置和类型设置

    2. 文件设置:勾选或填写是否包含表头、分隔符、编码格式等文件本身的设置内容,均设置默认值,无需手动填写

    3. 类型设置:

      1. 顶点映射和边映射:

        【顶点类型】 :选择顶点类型,并为其ID映射上传文件中列数据;

        【边类型】:选择边类型,为其起点类型和终点类型的ID列映射上传文件的列数据;

      2. 映射设置:为选定的顶点类型的属性映射上传文件中的列数据,此处,若属性名称与文件的表头名称一致,可自动匹配映射属性,无需手动填选

      3. 完成设置后,显示设置列表,方可进行下一步操作,支持映射的新增、编辑、删除操作

    设置映射的填写内容:

    image

    映射列表:

    image
    3.3.5 导入数据

    导入前需要填写导入设置参数,填写完成后,可开始向图库中导入数据

    1. 导入设置
    image
    1. 导入详情
    image

    3.4 数据分析

    3.4.1 模块入口

    左侧导航处:

    image
    3.4.2 多图切换

    通过左侧切换入口,灵活切换多图的操作空间

    image
    3.4.3 图分析与处理

    HugeGraph支持Apache TinkerPop3的图遍历查询语言Gremlin,Gremlin是一种通用的图数据库查询语言,通过输入Gremlin语句,点击执行,即可执行图数据的查询分析操作,并可实现顶点/边的创建及删除、顶点/边的属性修改等。
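在图分析面板中输入的就是普通的 Gremlin 语句;等价地,也可以直接调用 HugeGraph-Server 的 gremlin 接口执行同样的查询。下面是一个示意(请求体字段与 aliases 写法以对应版本的 RESTful API 文档为准,图名为假设值):

```bash
# 查询 10 个顶点,效果等价于在 Hubble 中执行同样的 Gremlin 语句(示意)
curl -X POST "http://127.0.0.1:8080/apis/gremlin" \
     -H "Content-Type: application/json" \
     -d '{
           "gremlin": "g.V().limit(10)",
           "bindings": {},
           "language": "gremlin-groovy",
           "aliases": {"graph": "hugegraph", "g": "__g_hugegraph"}
         }'
```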

    Gremlin查询后,下方为图结果展示区域,提供3种图结果展示方式,分别为:【图模式】、【表格模式】、【Json模式】。

    支持缩放、居中、全屏、导出等操作。

    【图模式】

    image

    【表格模式】

    image

    【Json模式】

    image
    3.4.4 数据详情

    点击顶点/边实体,可查看顶点/边的数据详情,包括:顶点/边类型,顶点ID,属性及对应值,拓展图的信息展示维度,提高易用性。

    3.4.5 图结果的多维路径查询

    除了全局的查询外,可针对查询结果中的顶点进行深度定制化查询以及隐藏操作,实现图结果的定制化挖掘。

    右击顶点,出现顶点的菜单入口,可进行展示、查询、隐藏等操作。

    双击顶点,也可展示与选中点关联的顶点。

    image
    3.4.6 新增顶点/边
    3.4.6.1 新增顶点

    在图区可通过两个入口,动态新增顶点,如下:

    1. 点击图区面板,出现添加顶点入口
    2. 点击右上角的操作栏中的首个图标

    通过选择或填写顶点类型、ID值、属性信息,完成顶点的增加。

    入口如下:

    image

    添加顶点内容如下:

    image
    3.4.6.2 新增边

    右击图结果中的顶点,可增加该点的出边或者入边。

    3.4.7 执行记录与收藏的查询
    1. 图区下方记载每次查询记录,包括:查询时间、执行类型、内容、状态、耗时、以及【收藏】和【加载】操作,实现图执行的全方位记录,有迹可循,并可对执行内容快速加载复用
    2. 提供语句的收藏功能,可对常用语句进行收藏操作,方便高频语句快速调用
    image

    3.5 任务管理

    3.5.1 模块入口

    左侧导航处:

    image
    3.5.2 任务管理
    1. 提供异步任务的统一的管理与结果查看,异步任务包括4类,分别为:
    1. 列表显示当前图的异步任务信息,包括:任务ID,任务名称,任务类型,创建时间,耗时,状态,操作,实现对异步任务的管理。
    2. 支持对任务类型和状态进行筛选
    3. 支持搜索任务ID和任务名称
    4. 可对异步任务进行删除或批量删除操作
    image
    3.5.3 Gremlin异步任务

    1.创建任务

    image

    点击查看入口,跳转到任务管理列表,如下:

    image

    4.查看结果

    3.5.4 OLAP算法任务

    Hubble上暂未提供可视化的OLAP算法执行,可调用RESTful API进行OLAP类算法任务,在任务管理中通过ID找到相应任务,查看进度与结果等。

    3.5.5 删除元数据、重建索引

    1.创建任务

    image
    image

    2.任务详情

    image

    Last modified April 17, 2022: rebuild doc (ef36544)
    +3.任务详情
• 提供【查看】入口,可跳转到任务详情,查看当前任务的具体执行情况;跳转到任务中心后,直接显示当前执行的任务行
  • image

    点击查看入口,跳转到任务管理列表,如下:

    image

    4.查看结果

    3.5.4 OLAP算法任务

    Hubble上暂未提供可视化的OLAP算法执行,可调用RESTful API进行OLAP类算法任务,在任务管理中通过ID找到相应任务,查看进度与结果等。

    3.5.5 删除元数据、重建索引

    1.创建任务

    image
    image

    2.任务详情

    image

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/quickstart/hugegraph-loader/index.html b/cn/docs/quickstart/hugegraph-loader/index.html index dd2c1f1fd..4bda4971d 100644 --- a/cn/docs/quickstart/hugegraph-loader/index.html +++ b/cn/docs/quickstart/hugegraph-loader/index.html @@ -503,7 +503,7 @@ --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \ --username admin --token admin --host xx.xx.xx.xx --port 8093 \ --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g -
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/quickstart/hugegraph-server/index.html b/cn/docs/quickstart/hugegraph-server/index.html index 914490edd..57e9f8274 100644 --- a/cn/docs/quickstart/hugegraph-server/index.html +++ b/cn/docs/quickstart/hugegraph-server/index.html @@ -210,7 +210,7 @@ }

    详细的API请参考RESTful-API文档

    7 停止Server

    $cd hugegraph-${version}
     $bin/stop-hugegraph.sh
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/quickstart/hugegraph-tools/index.html b/cn/docs/quickstart/hugegraph-tools/index.html index ed3921871..4ce7d8865 100644 --- a/cn/docs/quickstart/hugegraph-tools/index.html +++ b/cn/docs/quickstart/hugegraph-tools/index.html @@ -383,7 +383,7 @@ # 恢复图模式 ./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph graph-mode-set -m NONE
    8. 图迁移
    ./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph migrate --target-url http://127.0.0.1:8090 --target-graph hugegraph
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/cn/docs/quickstart/index.html b/cn/docs/quickstart/index.html index 457bfe4c4..eb379cdcd 100644 --- a/cn/docs/quickstart/index.html +++ b/cn/docs/quickstart/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Quick Start


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    Quick Start


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/cn/docs/summary/index.html b/cn/docs/summary/index.html index f15b5bcb1..a765b9d4d 100644 --- a/cn/docs/summary/index.html +++ b/cn/docs/summary/index.html @@ -13,7 +13,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Docs

    Quickstart

    Config

    API

    Guides

    Query Language

    Performance

    ChangeLogs


    Last modified November 27, 2022: Add HugeGraph-Computer Doc (#155) (19ab2ff)
    + Print entire section

    HugeGraph Docs

    Quickstart

    Config

    API

    Guides

    Query Language

    Performance

    ChangeLogs


    Last modified November 27, 2022: Add HugeGraph-Computer Doc (#155) (19ab2ff)
    diff --git a/cn/index.html b/cn/index.html index 9ccaabb5e..d687e6457 100644 --- a/cn/index.html +++ b/cn/index.html @@ -18,7 +18,7 @@

    Apache HugeGraph

                        Incubating

    Learn More -Download

    HugeGraph是一款易用、高效、通用的图数据库

    实现了Apache TinkerPop3框架、兼容Gremlin查询语言。

    HugeGraph支持百亿以上的顶点(Vertex)和边(Edge)快速导入,毫秒级的关联查询能力,并可与Hadoop、Spark等

    大数据平台集成以进行离线分析,主要应用场景包括关联分析、欺诈检测和知识图谱等。

    易用

    支持Gremlin图查询语言与RESTful API,并提供图检索常用接口,具备齐全的周边工具,支持分布式存储、数据多副本及横向扩容,内置多种后端存储引擎,轻松实现各种查询、分析。

    高效

    在图存储和图计算方面做了深度优化,提供支持多种数据源的批量导入工具,轻松完成百亿级数据快速导入,通过优化过的查询达到图检索的毫秒级响应,支持数千用户并发的在线实时操作。

    通用

    支持Apache Gremlin标准图查询语言和Property Graph标准图建模方法,支持基于图的OLTP和OLAP方案。集成Apache Hadoop及Apache Spark大数据平台,也可插件式轻松扩展后端存储引擎。

    Apache 的第一个图数据库项目

    使用易用的工具链

    可从获取图数据导入工具, 可视化界面以及备份还原迁移工具, 欢迎使用

    参与开源

    我们可以在 Github 上提交 Pull Request. 热烈欢迎大家加入!

    Read more …

    关注微信

    关注微信公众号 “HugeGraph”

    (推特正在路上…)

    Read more …

    欢迎大家参与 HugeGraph 的任何贡献

    +Download

    HugeGraph是一款易用、高效、通用的图数据库

    实现了Apache TinkerPop3框架、兼容Gremlin查询语言。

    HugeGraph支持百亿以上的顶点(Vertex)和边(Edge)快速导入,毫秒级的关联查询能力,并可与Hadoop、Spark等

    大数据平台集成以进行离线分析,主要应用场景包括关联分析、欺诈检测和知识图谱等。

    易用

    支持Gremlin图查询语言与RESTful API,并提供图检索常用接口,具备齐全的周边工具,支持分布式存储、数据多副本及横向扩容,内置多种后端存储引擎,轻松实现各种查询、分析。

    高效

    在图存储和图计算方面做了深度优化,提供支持多种数据源的批量导入工具,轻松完成百亿级数据快速导入,通过优化过的查询达到图检索的毫秒级响应,支持数千用户并发的在线实时操作。

    通用

    支持Apache Gremlin标准图查询语言和Property Graph标准图建模方法,支持基于图的OLTP和OLAP方案。集成Apache Hadoop及Apache Spark大数据平台,也可插件式轻松扩展后端存储引擎。

    Apache 的第一个图数据库项目

    使用易用的工具链

    可从获取图数据导入工具, 可视化界面以及备份还原迁移工具, 欢迎使用

    参与开源

    我们可以在 Github 上提交 Pull Request. 热烈欢迎大家加入!

    Read more …

    关注微信

    关注微信公众号 “HugeGraph”

    (推特正在路上…)

    Read more …

    欢迎大家参与 HugeGraph 的任何贡献

Apache HugeGraph is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

    diff --git a/cn/search/index.html b/cn/search/index.html index 0ff879e3b..7193c230c 100644 --- a/cn/search/index.html +++ b/cn/search/index.html @@ -1,5 +1,5 @@ Search Results | HugeGraph -

    Search Results

    +

    Search Results

    diff --git a/cn/sitemap.xml b/cn/sitemap.xml index 5cd7d7873..0acb099e2 100644 --- a/cn/sitemap.xml +++ b/cn/sitemap.xml @@ -1 +1 @@ -/cn/docs/guides/architectural/2022-11-27T21:05:55+08:00/cn/docs/config/config-guide/2022-04-17T11:36:55+08:00/cn/docs/language/hugegraph-gremlin/2022-09-15T15:16:23+08:00/cn/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T15:16:23+08:00/cn/docs/quickstart/hugegraph-server/2022-09-15T15:16:23+08:00/cn/docs/introduction/readme/2022-11-27T21:36:10+08:00/cn/docs/changelog/hugegraph-0.12.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/schema/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2022-04-17T11:36:55+08:00/cn/docs/config/config-option/2022-09-15T15:16:23+08:00/cn/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/cn/docs/download/download/2022-09-15T15:16:23+08:00/cn/docs/language/hugegraph-example/2022-09-15T15:16:23+08:00/cn/docs/clients/hugegraph-client/2022-09-15T15:16:23+08:00/cn/docs/performance/api-preformance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-loader/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/propertykey/2022-05-12T21:24:05+08:00/cn/docs/changelog/hugegraph-0.11.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2022-04-17T11:36:55+08:00/cn/docs/config/config-authentication/2022-04-17T11:36:55+08:00/cn/docs/clients/gremlin-console/2022-04-17T11:36:55+08:00/cn/docs/guides/custom-plugin/2022-09-15T15:16:23+08:00/cn/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-tools/2022-09-15T15:16:23+08:00/cn/docs/quickstart/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.10.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/cn/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/cn/docs/config/2022-04-17T11:36:55+08:00/cn/docs/config/config-https/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.9.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.2/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-hubble/2022-04-17T11:36:55+08:00/cn/docs/clients/2022-04-17T11:36:55+08:00/cn/docs/config/config-computer/2022-11-28T10:57:39+08:00/cn/docs/guides/faq/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.8.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-client/2022-04-17T11:36:55+08:00/cn/docs/guides/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/rebuild/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.7.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-computer/2022-11-28T10:57:39+08:00/cn/docs/language/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.6.1-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertex/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/edge/2022-09-15T15:16:23+08:00/cn/docs/performance/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.5.6-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.4.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/traverser/2022-04-17T11:36:55+08:00/cn
/docs/clients/restful-api/rank/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.3.3-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/graphs/2022-05-27T09:27:37+08:00/cn/docs/changelog/hugegraph-0.2.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/task/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/auth/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/other/2022-04-17T11:36:55+08:00/cn/docs/2022-09-15T15:16:23+08:00/cn/blog/news/2022-04-17T11:36:55+08:00/cn/blog/releases/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/easy-documentation-with-docsy/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/the-second-blog-post/2022-04-17T11:36:55+08:00/cn/blog/2018/01/04/another-great-release/2022-04-17T11:36:55+08:00/cn/docs/cla/2022-04-17T11:36:55+08:00/cn/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T15:16:23+08:00/cn/docs/summary/2022-11-27T21:05:55+08:00/cn/about/2022-04-17T11:36:55+08:00/cn/blog/2022-04-17T11:36:55+08:00/cn/categories//cn/community/2022-04-17T11:36:55+08:00/cn/2022-05-11T21:17:34+08:00/cn/search/2022-04-17T11:36:55+08:00/cn/tags/ \ No newline at end of file +/cn/docs/guides/architectural/2022-11-27T21:05:55+08:00/cn/docs/config/config-guide/2022-04-17T11:36:55+08:00/cn/docs/language/hugegraph-gremlin/2022-09-15T15:16:23+08:00/cn/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T15:16:23+08:00/cn/docs/quickstart/hugegraph-server/2022-09-15T15:16:23+08:00/cn/docs/introduction/readme/2022-11-27T21:36:10+08:00/cn/docs/changelog/hugegraph-0.12.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/schema/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2022-04-17T11:36:55+08:00/cn/docs/config/config-option/2022-09-15T15:16:23+08:00/cn/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/cn/docs/download/download/2022-09-15T15:16:23+08:00/cn/docs/language/hugegraph-example/2022-09-15T15:16:23+08:00/cn/docs/clients/hugegraph-client/2022-09-15T15:16:23+08:00/cn/docs/performance/api-preformance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-loader/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/propertykey/2022-05-12T21:24:05+08:00/cn/docs/changelog/hugegraph-0.11.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2022-04-17T11:36:55+08:00/cn/docs/config/config-authentication/2022-04-17T11:36:55+08:00/cn/docs/clients/gremlin-console/2022-04-17T11:36:55+08:00/cn/docs/guides/custom-plugin/2022-09-15T15:16:23+08:00/cn/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-tools/2022-09-15T15:16:23+08:00/cn/docs/quickstart/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.10.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.4.4/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/cn/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/cn/docs/config/2022-04-17T11:36:55+08:00/cn/docs/config/config-https/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.9.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/performance/api-preformance/hugegraph-api-0.2/2022-04-17
T11:36:55+08:00/cn/docs/quickstart/hugegraph-hubble/2022-04-17T11:36:55+08:00/cn/docs/clients/2022-04-17T11:36:55+08:00/cn/docs/config/config-computer/2022-11-28T10:57:39+08:00/cn/docs/guides/faq/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.8.0-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-client/2022-04-17T11:36:55+08:00/cn/docs/guides/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/rebuild/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.7.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/quickstart/hugegraph-computer/2022-11-28T10:57:39+08:00/cn/docs/language/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.6.1-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/vertex/2022-09-15T15:16:23+08:00/cn/docs/clients/restful-api/edge/2022-09-15T15:16:23+08:00/cn/docs/performance/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.5.6-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.4.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/traverser/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/rank/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.3.3-release-notes/2022-04-17T11:36:55+08:00/cn/docs/changelog/hugegraph-0.2-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/graphs/2022-05-27T09:27:37+08:00/cn/docs/changelog/hugegraph-0.2.4-release-notes/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/task/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/auth/2022-04-17T11:36:55+08:00/cn/docs/clients/restful-api/other/2022-04-17T11:36:55+08:00/cn/docs/2022-09-15T15:16:23+08:00/cn/blog/news/2022-04-17T11:36:55+08:00/cn/blog/releases/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/easy-documentation-with-docsy/2022-04-17T11:36:55+08:00/cn/blog/2018/10/06/the-second-blog-post/2022-04-17T11:36:55+08:00/cn/blog/2018/01/04/another-great-release/2022-04-17T11:36:55+08:00/cn/docs/cla/2022-04-17T11:36:55+08:00/cn/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T15:16:23+08:00/cn/docs/summary/2022-11-27T21:05:55+08:00/cn/about/2022-04-17T11:36:55+08:00/cn/blog/2022-04-17T11:36:55+08:00/cn/categories//cn/community/2022-04-17T11:36:55+08:00/cn/2022-12-12T18:18:56+08:00/cn/search/2022-04-17T11:36:55+08:00/cn/tags/ \ No newline at end of file diff --git a/cn/tags/index.html b/cn/tags/index.html index 04f891a6d..5f34711a5 100644 --- a/cn/tags/index.html +++ b/cn/tags/index.html @@ -1,5 +1,5 @@ Tags | HugeGraph -

    Tags

    +

    Tags

    diff --git a/community/_print/index.html b/community/_print/index.html index db38e68b5..7ba4c9297 100644 --- a/community/_print/index.html +++ b/community/_print/index.html @@ -1,6 +1,6 @@ Community | HugeGraph -

    Join the HugeGraph community

    HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here's a few ways to find out what's happening and get involved.

    +

    Join the HugeGraph community

    HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here's a few ways to find out what's happening and get involved.

    diff --git a/community/index.html b/community/index.html index cf8b514f1..caf0b2951 100644 --- a/community/index.html +++ b/community/index.html @@ -1,6 +1,6 @@ Community | HugeGraph -

    Join the HugeGraph community

    HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here's a few ways to find out what's happening and get involved.

    +

    Join the HugeGraph community

    HugeGraph is an open source project that anyone in the community can use, improve, and enjoy. We'd love you to join us! Here's a few ways to find out what's happening and get involved.

    diff --git a/docs/_print/index.html b/docs/_print/index.html index 5ff96965b..dc17f62e0 100644 --- a/docs/_print/index.html +++ b/docs/_print/index.html @@ -6593,7 +6593,7 @@ git rebase -i master

And push it to the GitHub fork repo again:

    # force push the local commit to fork repo
     git push -f origin bugfix-branch:bugfix-branch
    -

GitHub will automatically update the Pull Request after we push it; just wait for the code review.

    9.2 - Subscribe Mailing Lists

    It is highly recommended to subscribe to the development mailing list to keep up-to-date with the community.

While using HugeGraph, if you have any questions, ideas, or suggestions, you can take part in building the HugeGraph community through the Apache mailing list. Sending a subscription email is very simple; the steps are as follows:

1. Send an email to dev-subscribe@hugegraph.apache.org from your own email address; the subject and content can be arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, check whether it was automatically classified as spam, promotion, or subscription mail). Reply to that email directly, or click the link in it to reply quickly; the subject and content can be arbitrary.

3. Receive a welcome email. After completing the above steps, you will receive a welcome email with the subject WELCOME to dev@hugegraph.apache.org, which means you have successfully subscribed to the Apache HugeGraph mailing list.

    Unsubscribe Mailing Lists

    If you do not need to know what’s going on with HugeGraph, you can unsubscribe from the mailing list.

The steps to unsubscribe from the mailing list are as follows:

1. Send an email to dev-unsubscribe@hugegraph.apache.org from your subscribed email address; the subject and content can be arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, check whether it was automatically classified as spam, promotion, or subscription mail). Reply to that email directly, or click the link in it to reply quickly; the subject and content can be arbitrary.

3. Receive a goodbye email. After completing the above steps, you will receive a goodbye email with the subject GOODBYE from dev@hugegraph.apache.org, which means you have successfully unsubscribed from the Apache HugeGraph mailing list, and you will no longer receive emails from dev@hugegraph.apache.org.
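
For example, assuming a command-line mail client such as mailx and a locally configured mail transfer agent are available (both assumptions; any ordinary email client works just as well), the subscribe and unsubscribe emails could be sent like this:

# Subscribe: send any email from the address you want to subscribe (subject/body are arbitrary)
echo "subscribe" | mail -s "subscribe" dev-subscribe@hugegraph.apache.org
# Unsubscribe: send any email from the already-subscribed address
echo "unsubscribe" | mail -s "unsubscribe" dev-unsubscribe@hugegraph.apache.org

In either case, remember to reply to the confirmation email from dev-help@hugegraph.apache.org (step 2 above) before the change takes effect.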

    10 - CHANGELOGS

    10.1 - HugeGraph 0.12 Release Notes

    API & Client

API Updates

Other Changes

Core & Server

Feature Updates

Bug Fixes

Configuration Option Changes:

Other Changes

    Loader

    Tools

    11 -

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

    We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

    9 Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.

    12 -

    HugeGraph Docs

    Quickstart

    Config

    API

    Guides

    Query Language

    Performance

    ChangeLogs

    +

GitHub will automatically update the Pull Request after we push it; just wait for the code review.

    9.2 - Subscribe Mailing Lists

    It is highly recommended to subscribe to the development mailing list to keep up-to-date with the community.

While using HugeGraph, if you have any questions, ideas, or suggestions, you can take part in building the HugeGraph community through the Apache mailing list. Sending a subscription email is very simple; the steps are as follows:

1. Send an email to dev-subscribe@hugegraph.apache.org from your own email address; the subject and content can be arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, check whether it was automatically classified as spam, promotion, or subscription mail). Reply to that email directly, or click the link in it to reply quickly; the subject and content can be arbitrary.

3. Receive a welcome email. After completing the above steps, you will receive a welcome email with the subject WELCOME to dev@hugegraph.apache.org, which means you have successfully subscribed to the Apache HugeGraph mailing list.

    Unsubscribe Mailing Lists

    If you do not need to know what’s going on with HugeGraph, you can unsubscribe from the mailing list.

The steps to unsubscribe from the mailing list are as follows:

1. Send an email to dev-unsubscribe@hugegraph.apache.org from your subscribed email address; the subject and content can be arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, check whether it was automatically classified as spam, promotion, or subscription mail). Reply to that email directly, or click the link in it to reply quickly; the subject and content can be arbitrary.

3. Receive a goodbye email. After completing the above steps, you will receive a goodbye email with the subject GOODBYE from dev@hugegraph.apache.org, which means you have successfully unsubscribed from the Apache HugeGraph mailing list, and you will no longer receive emails from dev@hugegraph.apache.org.

    10 - CHANGELOGS

    10.1 - HugeGraph 0.12 Release Notes

    API & Client

API Updates

Other Changes

Core & Server

Feature Updates

Bug Fixes

Configuration Option Changes:

Other Changes

    Loader

    Tools

    11 -

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

    We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

    9 Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.

    12 -

    HugeGraph Docs

    Quickstart

    Config

    API

    Guides

    Query Language

    Performance

    ChangeLogs

    diff --git a/docs/changelog/_print/index.html b/docs/changelog/_print/index.html index f747e9923..3ee623a72 100644 --- a/docs/changelog/_print/index.html +++ b/docs/changelog/_print/index.html @@ -1,6 +1,6 @@ CHANGELOGS | HugeGraph

    This is the multi-page printable view of this section. -Click here to print.

    Return to the regular view of this page.

    CHANGELOGS

    1 - HugeGraph 0.12 Release Notes

    API & Client

API Updates

• Support connecting to the graph service via https + auth mode (hugegraph-client #109 #110)
• Unify the parameter names and default values of OLTP APIs such as kout/kneighbor (hugegraph-client #122 #123)
• Support full-text search on properties via P.textcontains() in the RESTful API (hugegraph #1312)
• Add the graph_read_mode API to switch between OLTP and OLAP read modes (hugegraph #1332)
• Support aggregate properties of list/set type (hugegraph #1332)
• Add the METRICS resource type to the auth API (hugegraph #1355, hugegraph-client #114)
• Add the SCHEMA resource type to the auth API (hugegraph #1362, hugegraph-client #117)
• Add a manual compact API, supporting the rocksdb/cassandra/hbase backends (hugegraph #1378)
• Add login/logout APIs to the auth API to issue and revoke tokens (hugegraph #1500, hugegraph-client #125)
• Add the project API to the auth API (hugegraph #1504, hugegraph-client #127)
• Add an OLAP write-back API, supporting the cassandra/rocksdb backends (hugegraph #1506, hugegraph-client #129)
• Add an API to return all schemas of a graph (hugegraph #1567, hugegraph-client #134)
• Change the HTTP status code of the property key create/update APIs to 202 (hugegraph #1584)
• Enhance Text.contains() to support 3 formats: "word", "(word)", "(word1|word2|word3)" (hugegraph #1652)
• Unify the behavior of special characters in properties (hugegraph #1670 #1684)
• Support dynamically creating, cloning, and dropping graph instances (hugegraph-client #135)

Other Changes

• Fix the issue that the IndexLabelV56 id was lost when restoring an index label (hugegraph-client #118)
• Add a name() method to the Edge class (hugegraph-client #121)

    Core & Server

Feature Updates

• Support dynamically creating graph instances (hugegraph #1065)
• Support invoking OLTP algorithms via Gremlin (hugegraph #1289)
• Support multiple clusters using the same graph auth service to share permission information (hugegraph #1350)
• Support cache synchronization across multiple nodes (hugegraph #1357)
• Support native collections in OLTP algorithms to reduce GC pressure and improve performance (hugegraph #1409)
• Support taking or restoring snapshots for newly added Raft nodes (hugegraph #1439)
• Support building secondary indexes on collection properties (hugegraph #1474)
• Support audit logs, including compression and rate limiting (hugegraph #1492 #1493)
• Support high-performance parallel lock-free native collections in OLTP algorithms to improve performance (hugegraph #1552)

Bug Fixes

• Fix the NPE issue in the weighted shortest path algorithm (hugegraph #1250)
• Add a whitelist of safe operations related to Raft (hugegraph #1257)
• Fix the issue that RocksDB instances were not closed properly (hugegraph #1264)
• Explicitly trigger a Raft snapshot after the truncate operation (hugegraph #1275)
• Fix the issue that the Raft leader did not update the cache when receiving requests forwarded by followers (hugegraph #1279)
• Fix the unstable results of the weighted shortest path algorithm (hugegraph #1280)
• Fix the issue that the limit parameter of the rays algorithm did not take effect (hugegraph #1284)
• Fix the issue that the capacity parameter of the neighborrank algorithm was not checked (hugegraph #1290)
• Fix PostgreSQL initialization failure when no database with the same name as the user exists (hugegraph #1293)
• Fix HBase backend initialization failure when Kerberos is enabled (hugegraph #1294)
• Fix the incorrect shard-end check in the HBase/RocksDB backends (hugegraph #1306)
• Fix the issue that the weighted shortest path algorithm did not check whether the target vertex exists (hugegraph #1307)
• Fix the issue with non-String ids in the personalrank/neighborrank algorithms (hugegraph #1310)
• Check that only the master node is allowed to schedule gremlin jobs (hugegraph #1314)
• Fix partially inaccurate results of g.V().hasLabel().limit(n) caused by index coverage (hugegraph #1316)
• Fix the NaN error in the jaccardsimilarity algorithm when the union set is empty (hugegraph #1324)
• Fix data being out of sync across nodes when a Raft follower performs schema operations (hugegraph #1325)
• Fix TTL not taking effect because the tx was not closed (hugegraph #1330)
• Fix exception handling when a gremlin job's result exceeds the Cassandra limit but is within the task limit (hugegraph #1334)
• Check that the graph must exist for the auth-delete and role-get API operations (hugegraph #1338)
• Fix abnormal serialization of async task results containing path/tree (hugegraph #1351)
• Fix the NPE when initializing the admin user (hugegraph #1360)
• Fix atomicity issues of async tasks, ensuring that update/get fields and re-schedule are atomic (hugegraph #1361)
• Fix the issue with the NONE resource type in auth (hugegraph #1362)
• Fix the SecurityException on truncate and the loss of admin information when auth is enabled (hugegraph #1365)
• Fix auth exceptions being ignored when parsing data with auth enabled (hugegraph #1380)
• Fix AuthManager trying to connect to other nodes during initialization (hugegraph #1381)
• Fix base64 decoding errors caused by specific shard information (hugegraph #1383)
• Fix the empty creator during permission checks with consistent-hash LB when auth is enabled (hugegraph #1385)
• Improve auth so that the VAR resource no longer depends on the VERTEX resource (hugegraph #1386)
• With auth enabled, make schema operations depend only on the specific resource (hugegraph #1387)
• With auth enabled, change some operations to depend on the ANY resource instead of the STATUS resource (hugegraph #1391)
• With auth enabled, forbid initializing the admin password to empty (hugegraph #1400)
• Check that username/password must not be empty when creating a user (hugegraph #1402)
• Fix PrimaryKey or SortKey being allowed to be set as nullable properties when updating a label (hugegraph #1406)
• Fix ScyllaDB losing paged results (hugegraph #1407)
• Fix the weighted shortest path algorithm forcibly casting the weight property to double (hugegraph #1432)
• Unify the naming of the degree parameter across OLTP algorithms (hugegraph #1433)
• Fix the fusiformsimilarity algorithm returning all vertices when similars is empty (hugegraph #1434)
• Improve the paths algorithm to return an empty path when the source and target vertices are the same (hugegraph #1435)
• Change the default value of the limit parameter of kout/kneighbor from 10 to 10000000 (hugegraph #1436)
• Fix '+' in paging information being URL-encoded as a space (hugegraph #1437)
• Improve the error message of the edge update API (hugegraph #1443)
• Fix the degree of the kout algorithm not taking effect across all labels (hugegraph #1459)
• Improve the kneighbor/kout algorithms so that the source vertex is not allowed in the result set (hugegraph #1459 #1463)
• Unify the behavior of the GET and POST versions of kout/kneighbor (hugegraph #1470)
• Improve the error message for mismatched vertex types when creating an edge (hugegraph #1477)
• Fix the residual index issue of Range Index (hugegraph #1498)
• Fix auth operations not invalidating the cache (hugegraph #1528)
• Change the default value of the limit parameter of sameneighbor from 10 to 10000000 (hugegraph #1530)
• Fix the clear API calling create snapshot on all backends when it should not (hugegraph #1532)
• Fix Index Label creation blocking in loading mode (hugegraph #1548)
• Fix issues with adding a graph to or removing a graph from a project (hugegraph #1562)
• Improve some error messages of auth operations (hugegraph #1563)
• Support setting float properties to Infinity/NaN values (hugegraph #1578)
• Fix the quorum read issue when Raft safe_read is enabled (hugegraph #1618)
• Fix the unit of the token expiration time configuration (hugegraph #1625)
• Fix the MySQL Statement resource leak (hugegraph #1627)
• Fix Schema.getIndexLabel returning no data under race conditions (hugegraph #1629)
• Fix HugeVertex4Insert not being serializable (hugegraph #1630)
• Fix the MySQL count Statement not being closed (hugegraph #1640)
• Fix the state being out of sync when deleting an Index Label fails (hugegraph #1642)
• Fix MySQL statements not being closed when gremlin execution times out (hugegraph #1643)
• Improve Search Index to be compatible with the special Unicode characters \u0000 to \u0003 (hugegraph #1659)
• Fix Char not being converted to String, introduced by #1659 (hugegraph #1664)
• Fix abnormal results of has() + within() queries (hugegraph #1680)
• Upgrade Log4j to 2.17 to fix security vulnerabilities (hugegraph #1686 #1698 #1702)
• Fix the NPE in HBase backend shard scan when the startkey contains an empty string (hugegraph #1691)
• Fix the performance degradation of the paths algorithm when traversing deep cycles (hugegraph #1694)
• Improve the default parameter values and error checks of the personalrank algorithm (hugegraph #1695)
• Fix the P.within condition not taking effect in the RESTful API (hugegraph #1704)
• Fix being unable to dynamically create graphs when auth is enabled (hugegraph #1708)

Configuration Option Changes:

• Share the naming of SSL-related configuration options (hugegraph #1260)
• Support the RocksDB option rocksdb.level_compaction_dynamic_level_bytes (hugegraph #1262)
• Remove the RESTful Server protocol option restserver.protocol; the scheme is now extracted automatically from the URL (hugegraph #1272)
• Add the PostgreSQL option jdbc.postgresql.connect_database (hugegraph #1293)
• Add the option vertex.encode_primary_key_number to control whether vertex primary keys are encoded (hugegraph #1323)
• Add the option query.optimize_aggregate_by_index to enable index optimization for aggregate queries (hugegraph #1549)
• Change the default value of cache_type from l1 to l2 (hugegraph #1681)
• Add the JDBC forced-reconnect option jdbc.forced_auto_reconnect (hugegraph #1710)

Other Changes

• Add a default SSL certificate file (hugegraph #1254)
• OLTP parallel requests share a thread pool instead of each request using its own thread pool (hugegraph #1258)
• Fix issues in the Example (hugegraph #1308)
• Use jraft version 1.3.5 (hugegraph #1313)
• Disable the RocksDB WAL when Raft mode is enabled (hugegraph #1318)
• Use TarLz4Util to improve snapshot compression performance (hugegraph #1336)
• Bump the store version because the property key adds read frequency (hugegraph #1341)
• The vertex/edge GET APIs use the queryVertex/queryEdge methods instead of the iterator methods (hugegraph #1345)
• Support BFS-optimized multi-degree queries (hugegraph #1359)
• Improve the query performance issue caused by RocksDB deleteRange() (hugegraph #1375)
• Fix the travis-ci "cannot find symbol Namifiable" issue (hugegraph #1376)
• Ensure the RocksDB snapshot is on the same disk as the specified data path (hugegraph #1392)
• Fix the inaccurate free_memory calculation on macOS (hugegraph #1396)
• Add a Raft onBusy callback to work with rate limiting (hugegraph #1401)
• Upgrade netty-all from 4.1.13.Final to 4.1.42.Final (hugegraph #1403)
• Support pausing the TaskScheduler when it is set to loading mode (hugegraph #1414)
• Fix issues in the raft-tools script (hugegraph #1416)
• Fix the license params issue (hugegraph #1420)
• Improve the performance of writing auth logs via batch flush & async write (hugegraph #1448)
• Add logging of the MySQL connection URL (hugegraph #1451)
• Improve the performance of user information verification (hugegraph #1460)
• Fix TTL errors caused by the start time issue (hugegraph #1478)
• Support hot reloading of the log configuration and compression of audit logs (hugegraph #1492)
• Support per-user rate limiting of audit logs (hugegraph #1493)
• RamCache supports user-defined expiration times (hugegraph #1494)
• Cache the login role on the auth client side to avoid repeated RPC calls (hugegraph #1507)
• Fix IdSet.contains() not overriding AbstractCollection.contains() (hugegraph #1511)
• Fix the missing rollback when commitPartOfEdgeDeletions() fails (hugegraph #1513)
• Improve cache metrics performance (hugegraph #1515)
• Print exception logs when a license operation error occurs (hugegraph #1522)
• Improve the SimilarsMap implementation (hugegraph #1523)
• Use the tokenless way to update coverage (hugegraph #1529)
• Improve the code of the project update API (hugegraph #1537)
• Allow access to GRAPH_STORE from option() (hugegraph #1546)
• Optimize the count queries of kout/kneighbor to avoid copying collections (hugegraph #1550)
• Optimize shortestpath traversal to start from the side with less data (hugegraph #1569)
• Improve the allowed-keys hint of the rocksdb.data_disks option (hugegraph #1585)
• Optimize the performance of the id2code method in OLTP traversal for number ids (hugegraph #1623)
• Optimize HugeElement.getProperties() to return Collection<Property> (hugegraph #1624)
• Add the APACHE PROPOSAL file (hugegraph #1644)
• Improve the close tx flow (hugegraph #1655)
• Catch all exception types when closing MySQL during reset() (hugegraph #1661)
• Improve the OLAP property module code (hugegraph #1675)
• Improve the execution performance of the query module (hugegraph #1711)

    Loader

• Support importing Parquet files (hugegraph-loader #174)
• Support HDFS Kerberos authentication (hugegraph-loader #176)
• Support connecting to the server over HTTPS to import data (hugegraph-loader #183)
• Fix the trust store file path issue (hugegraph-loader #186)
• Handle exceptions when resetting the loading mode (hugegraph-loader #187)
• Add checks for non-null properties when inserting data (hugegraph-loader #190)
• Fix time comparison issues caused by different client and server time zones (hugegraph-loader #192)
• Optimize data parsing performance (hugegraph-loader #194)
• Check that a user-specified file header must not be empty (hugegraph-loader #195)
• Fix the MySQL struct.json format issue in the example program (hugegraph-loader #198)
• Fix the inaccurate vertex/edge import speed (hugegraph-loader #200 #205)
• Ensure vertices are imported before edges when check-vertex is enabled (hugegraph-loader #206)
• Fix the array overflow when edge JSON data is imported in inconsistent formats (hugegraph-loader #211)
• Fix the NPE caused by a missing edge mapping file (hugegraph-loader #213)
• Fix the read time possibly being negative (hugegraph-loader #215)
• Improve log output for directory files (hugegraph-loader #223)
• Improve the loader's schema handling process (hugegraph-loader #230)

    Tools

• Support the HTTPS protocol (hugegraph-tools #71)
• Remove the --protocol parameter; it is now extracted automatically from the URL (hugegraph-tools #72)
• Support dumping data to the HDFS file system (hugegraph-tools #73)
• Fix the trust store file path issue (hugegraph-tools #75)
• Support backup and restore of auth information (hugegraph-tools #76)
• Support Printer output without parameters (hugegraph-tools #79)
• Fix the macOS free_memory calculation issue (hugegraph-tools #82)
• Support specifying the number of threads for backup/restore (hugegraph-tools #83)
• Support commands for dynamically creating, cloning, and dropping graphs (hugegraph-tools #95)
    +Click here to print.

    Return to the regular view of this page.

    CHANGELOGS

    1 - HugeGraph 0.12 Release Notes

    API & Client

API Updates

Other Changes

Core & Server

Feature Updates

Bug Fixes

Configuration Option Changes:

Other Changes

    Loader

    Tools

    diff --git a/docs/changelog/hugegraph-0.12.0-release-notes/index.html b/docs/changelog/hugegraph-0.12.0-release-notes/index.html index 016ae9c39..bf2b76364 100644 --- a/docs/changelog/hugegraph-0.12.0-release-notes/index.html +++ b/docs/changelog/hugegraph-0.12.0-release-notes/index.html @@ -8,7 +8,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph 0.12 Release Notes

    API & Client

API Updates

Other Changes

Core & Server

Feature Updates

Bug Fixes

Configuration Option Changes:

Other Changes

    Loader

    Tools


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph 0.12 Release Notes

    API & Client

API Updates

Other Changes

Core & Server

Feature Updates

Bug Fixes

Configuration Option Changes:

Other Changes

    Loader

    Tools


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/changelog/index.html b/docs/changelog/index.html index a31dc60c3..1b6adaef0 100644 --- a/docs/changelog/index.html +++ b/docs/changelog/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    CHANGELOGS


    + Print entire section

    CHANGELOGS


    diff --git a/docs/cla/index.html b/docs/cla/index.html index c40b4f8f8..ead0b768b 100644 --- a/docs/cla/index.html +++ b/docs/cla/index.html @@ -13,7 +13,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

    We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

    9 Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.


    + Print entire section

    Contributor Agreement

    Individual Contributor exclusive License Agreement

    (including the TRADITIONAL PATENT LICENSE OPTION)

Thank you for your interest in contributing to all of HugeGraph’s projects (“We” or “Us”).

    The purpose of this contributor agreement (“Agreement”) is to clarify and document the rights granted by contributors to Us. To make this document effective, please follow the comment of GitHub CLA-Assistant when submitting a new pull request.

    How to use this Contributor Agreement

    If You are an employee and have created the Contribution as part of your employment, You need to have Your employer approve this Agreement or sign the Entity version of this document. If You do not own the Copyright in the entire work of authorship, any other author of the Contribution should also sign this – in any event, please contact Us at hugegraph@googlegroups.com

    1. Definitions

    “You” means the individual Copyright owner who Submits a Contribution to Us.

    “Contribution” means any original work of authorship, including any original modifications or additions to an existing work of authorship, Submitted by You to Us, in which You own the Copyright.

    “Copyright” means all rights protecting works of authorship, including copyright, moral and neighboring rights, as appropriate, for the full term of their existence.

    “Material” means the software or documentation made available by Us to third parties. When this Agreement covers more than one software project, the Material means the software or documentation to which the Contribution was Submitted. After You Submit the Contribution, it may be included in the Material.

    “Submit” means any act by which a Contribution is transferred to Us by You by means of tangible or intangible media, including but not limited to electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, Us, but excluding any transfer that is conspicuously marked or otherwise designated in writing by You as “Not a Contribution.”

    “Documentation” means any non-software portion of a Contribution.

    2. License grant

    Subject to the terms and conditions of this Agreement, You hereby grant to Us a worldwide, royalty-free, Exclusive, perpetual and irrevocable (except as stated in Section 8.2) license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    2.2 Moral rights

Moral Rights remain unaffected to the extent they are recognized and not waivable by applicable law. Notwithstanding, You may add your name to the attribution mechanism customarily used in the Materials you Contribute to, such as the header of the source code files of Your Contribution, and We will respect this attribution when using Your Contribution.

    Upon such grant of rights to Us, We immediately grant to You a worldwide, royalty-free, non-exclusive, perpetual and irrevocable license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, under the Copyright covering the Contribution to use the Contribution by all means, including, but not limited to:

    This license back is limited to the Contribution and does not provide any rights to the Material.

    3. Patents

    3.1 Patent license

    Subject to the terms and conditions of this Agreement You hereby grant to Us and to recipients of Materials distributed by Us a worldwide, royalty-free, non-exclusive, perpetual and irrevocable (except as stated in Section 3.2) patent license, with the right to transfer an unlimited number of non-exclusive licenses or to grant sublicenses to third parties, to make, have made, use, sell, offer for sale, import and otherwise transfer the Contribution and the Contribution in combination with any Material (and portions of such combination). This license applies to all patents owned or controlled by You, whether already acquired or hereafter acquired, that would be infringed by making, having made, using, selling, offering for sale, importing or otherwise transferring of Your Contribution(s) alone or by combination of Your Contribution(s) with any Material.

    3.2 Revocation of patent license

    You reserve the right to revoke the patent license stated in section 3.1 if We make any infringement claim that is targeted at your Contribution and not asserted for a Defensive Purpose. An assertion of claims of the Patents shall be considered for a “Defensive Purpose” if the claims are asserted against an entity that has filed, maintained, threatened, or voluntarily participated in a patent infringement lawsuit against Us or any of Our licensees.

    4. License obligations by Us

    We agree to (sub)license the Contribution or any Materials containing, based on or derived from your Contribution under the terms of any licenses the Free Software Foundation classifies as Free Software License and which are approved by the Open Source Initiative as Open Source licenses.

    More specifically and in strict accordance with the above paragraph, we agree to (sub)license the Contribution or any Materials containing, based on or derived from the Contribution only in accordance with our licensing policy available at: http://www.apache.org/licenses/LICENSE-2.0.

    In addition, We may use the following licenses for Documentation in the Contribution: GFDL-1.2 (including any right to adopt any future version of a license).

We agree to license patents owned or controlled by You only to the extent necessary to (sub)license Your Contribution(s) and the combination of Your Contribution(s) with the Material under the terms of any licenses the Free Software Foundation classifies as Free Software licenses and which are approved by the Open Source Initiative as Open Source licenses.

    5. Disclaimer

    THE CONTRIBUTION IS PROVIDED “AS IS”. MORE PARTICULARLY, ALL EXPRESS OR IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTY OF SATISFACTORY QUALITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT ARE EXPRESSLY DISCLAIMED BY YOU TO US AND BY US TO YOU. TO THE EXTENT THAT ANY SUCH WARRANTIES CANNOT BE DISCLAIMED, SUCH WARRANTY IS LIMITED IN DURATION AND EXTENT TO THE MINIMUM PERIOD AND EXTENT PERMITTED BY LAW.

    6. Consequential damage waiver

    TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT WILL YOU OR WE BE LIABLE FOR ANY LOSS OF PROFITS, LOSS OF ANTICIPATED SAVINGS, LOSS OF DATA, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL AND EXEMPLARY DAMAGES ARISING OUT OF THIS AGREEMENT REGARDLESS OF THE LEGAL OR EQUITABLE THEORY (CONTRACT, TORT OR OTHERWISE) UPON WHICH THE CLAIM IS BASED.

    7. Approximation of disclaimer and damage waiver

    IF THE DISCLAIMER AND DAMAGE WAIVER MENTIONED IN SECTION 5. AND SECTION 6. CANNOT BE GIVEN LEGAL EFFECT UNDER APPLICABLE LOCAL LAW, REVIEWING COURTS SHALL APPLY LOCAL LAW THAT MOST CLOSELY APPROXIMATES AN ABSOLUTE WAIVER OF ALL CIVIL OR CONTRACTUAL LIABILITY IN CONNECTION WITH THE CONTRIBUTION.

    8. Term

    8.1 This Agreement shall come into effect upon Your acceptance of the terms and conditions.

    8.2 This Agreement shall apply for the term of the copyright and patents licensed here. However, You shall have the right to terminate the Agreement if We do not fulfill the obligations as set forth in Section 4. Such termination must be made in writing.

    8.3 In the event of a termination of this Agreement Sections 5, 6, 7, 8 and 9 shall survive such termination and shall remain in full force thereafter. For the avoidance of doubt, Free and Open Source Software (sub)licenses that have already been granted for Contributions at the date of the termination shall remain in full force after the termination of this Agreement.

9. Miscellaneous

    9.1 This Agreement and all disputes, claims, actions, suits or other proceedings arising out of this agreement or relating in any way to it shall be governed by the laws of China excluding its private international law provisions.

    9.2 This Agreement sets out the entire agreement between You and Us for Your Contributions to Us and overrides all other agreements or understandings.

    9.3 In case of Your death, this agreement shall continue with Your heirs. In case of more than one heir, all heirs must exercise their rights through a commonly authorized person.

    9.4 If any provision of this Agreement is found void and unenforceable, such provision will be replaced to the extent possible with a provision that comes closest to the meaning of the original provision and that is enforceable. The terms and conditions set forth in this Agreement shall apply notwithstanding any failure of essential purpose of this Agreement or any limited remedy to the maximum extent possible under law.

    9.5 You agree to notify Us of any facts or circumstances of which you become aware that would make this Agreement inaccurate in any respect.


    diff --git a/docs/clients/_print/index.html b/docs/clients/_print/index.html index 0260a58e5..4a509ffbc 100644 --- a/docs/clients/_print/index.html +++ b/docs/clients/_print/index.html @@ -4560,7 +4560,7 @@ gremlin> :> @script ==>6 -

For more information on the use of gremlin-console, please refer to the Apache TinkerPop official website

    +

For more information on the use of gremlin-console, please refer to the Apache TinkerPop official website

    diff --git a/docs/clients/gremlin-console/index.html b/docs/clients/gremlin-console/index.html index 7b7e066a1..d9beb682c 100644 --- a/docs/clients/gremlin-console/index.html +++ b/docs/clients/gremlin-console/index.html @@ -233,7 +233,7 @@ gremlin> :> @script ==>6 -

For more information on the use of gremlin-console, please refer to the Apache TinkerPop official website


    Last modified May 25, 2022: fix format (e4e5b5b)
    +

For more information on the use of gremlin-console, please refer to the Apache TinkerPop official website


    Last modified May 25, 2022: fix format (e4e5b5b)
    diff --git a/docs/clients/hugegraph-client/index.html b/docs/clients/hugegraph-client/index.html index 2b8e14a14..9d574c26c 100644 --- a/docs/clients/hugegraph-client/index.html +++ b/docs/clients/hugegraph-client/index.html @@ -83,7 +83,7 @@

    3 Graph

    3.1 Vertex

    Vertices are the most basic elements of a graph, and there can be many vertices in a graph. Here is an example of adding vertices:

    Vertex marko = graph.addVertex(T.label, "person", "name", "marko", "age", 29);
     Vertex lop = graph.addVertex(T.label, "software", "name", "lop", "lang", "java", "price", 328);
     

    3.2 Edge

After adding vertices, edges are also needed to form a complete graph. Here is an example of adding edges:

    Edge knows1 = marko.addEdge("knows", vadas, "city", "Beijing");
    -

Note: When the edge label's frequency is multiple, values must be set for the properties specified as sortKeys.

    4 Examples

For simple examples, refer to HugeGraph-Client


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    +

Note: When the edge label's frequency is multiple, values must be set for the properties specified as sortKeys.

    4 Examples

For simple examples, refer to HugeGraph-Client


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/clients/index.html b/docs/clients/index.html index 0f75a8b9e..04e2559d2 100644 --- a/docs/clients/index.html +++ b/docs/clients/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    API


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    API


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/_print/index.html b/docs/clients/restful-api/_print/index.html index 1474e0755..0330a618f 100644 --- a/docs/clients/restful-api/_print/index.html +++ b/docs/clients/restful-api/_print/index.html @@ -4257,7 +4257,7 @@ "api": "0.13.2.0" } } - + diff --git a/docs/clients/restful-api/auth/index.html b/docs/clients/restful-api/auth/index.html index df168baaa..3821d5d38 100644 --- a/docs/clients/restful-api/auth/index.html +++ b/docs/clients/restful-api/auth/index.html @@ -406,7 +406,7 @@ "group": "-69:all", "target": "-77:all" } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/edge/index.html b/docs/clients/restful-api/edge/index.html index 95b2f413f..24706dd5e 100644 --- a/docs/clients/restful-api/edge/index.html +++ b/docs/clients/restful-api/edge/index.html @@ -354,7 +354,7 @@
    Response Status
    204
     

Delete an edge by Label + Id

When an edge is deleted by specifying both the label parameter and the id, performance is generally better than deleting by id alone.

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person
     
    Response Status
    204
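For reference, the same request can be issued from the command line. This is an illustrative sketch added here, not part of the original page; the URL is quoted so the shell does not treat the > characters in the edge id as redirections.

# Delete the edge by Label + Id (a 204 response is expected on success)
curl -i -X DELETE "http://localhost:8080/graphs/hugegraph/graph/edges/S1:peter>1>>S2:lop?label=person"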
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/docs/clients/restful-api/edgelabel/index.html b/docs/clients/restful-api/edgelabel/index.html index e55e57901..59fe9a712 100644 --- a/docs/clients/restful-api/edgelabel/index.html +++ b/docs/clients/restful-api/edgelabel/index.html @@ -194,7 +194,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.
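As an aside (added for illustration, not from the original page), the same status check can be run with curl; the JSON fields returned depend on the task and are not reproduced here.

# Query the execution status of asynchronous task 1; the server returns a JSON description of the task
curl "http://localhost:8080/graphs/hugegraph/tasks/1"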


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/graphs/index.html b/docs/clients/restful-api/graphs/index.html index 6e04f5883..73f1e9397 100644 --- a/docs/clients/restful-api/graphs/index.html +++ b/docs/clients/restful-api/graphs/index.html @@ -121,7 +121,7 @@ "local": "OK" } } -
    Last modified May 27, 2022: divide create graph into clone and create (665739b)
    +
    Last modified May 27, 2022: divide create graph into clone and create (665739b)
    diff --git a/docs/clients/restful-api/gremlin/index.html b/docs/clients/restful-api/gremlin/index.html index c7f11cbd3..c09977b34 100644 --- a/docs/clients/restful-api/gremlin/index.html +++ b/docs/clients/restful-api/gremlin/index.html @@ -141,7 +141,7 @@
    Response Body
    {
     	"task_id": 2
     }
    -

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/2 (where "2" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/index.html b/docs/clients/restful-api/index.html index 4cf26e49e..496589a5b 100644 --- a/docs/clients/restful-api/index.html +++ b/docs/clients/restful-api/index.html @@ -7,7 +7,7 @@ Create documentation issue Create project issue Print entire section

    HugeGraph RESTful API

HugeGraph-Server provides clients with interfaces for operating on graphs over the HTTP protocol via HugeGraph-API, mainly including the create, read, update and delete of schema metadata and graph data, traversal algorithms, variables, graph operations, and other operations.


    Last modified April 17, 2022: rebuild doc (ef36544)
+graph data CRUD, traversal algorithms, variables, graph operations, and other operations.


    Schema API

    PropertyKey API

    VertexLabel API

    EdgeLabel API

    IndexLabel API

    Rebuild API

    Vertex API

    Edge API

    Traverser API

    Rank API

    Variable API

    Graphs API

    Task API

    Gremlin API

    Authentication API

    Other API


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/indexlabel/index.html b/docs/clients/restful-api/indexlabel/index.html index ab2edad41..04bb0fbeb 100644 --- a/docs/clients/restful-api/indexlabel/index.html +++ b/docs/clients/restful-api/indexlabel/index.html @@ -99,7 +99,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/other/index.html b/docs/clients/restful-api/other/index.html index 1163299b7..34dbe921e 100644 --- a/docs/clients/restful-api/other/index.html +++ b/docs/clients/restful-api/other/index.html @@ -22,7 +22,7 @@ "api": "0.13.2.0" } } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/propertykey/index.html b/docs/clients/restful-api/propertykey/index.html index e596db19b..e9f01ff4c 100644 --- a/docs/clients/restful-api/propertykey/index.html +++ b/docs/clients/restful-api/propertykey/index.html @@ -149,7 +149,7 @@
    Response Body
    {
         "task_id" : 0
     }
    -

    Last modified May 12, 2022: fix: bad request body simple in propertykey.md (1c933ca)
    +
    Last modified May 12, 2022: fix: bad request body simple in propertykey.md (1c933ca)
    diff --git a/docs/clients/restful-api/rank/index.html b/docs/clients/restful-api/rank/index.html index 9750e628e..3965496e2 100644 --- a/docs/clients/restful-api/rank/index.html +++ b/docs/clients/restful-api/rank/index.html @@ -255,7 +255,7 @@ } ] } -
    4.2.2.3 Suitable Scenario

For a given start vertex, find the most highly recommended vertices in each layer


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    +
    4.2.2.3 Suitable Scenario

For a given start vertex, find the most highly recommended vertices in each layer


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/clients/restful-api/rebuild/index.html b/docs/clients/restful-api/rebuild/index.html index 620b25840..8bb1d3c0a 100644 --- a/docs/clients/restful-api/rebuild/index.html +++ b/docs/clients/restful-api/rebuild/index.html @@ -32,7 +32,7 @@
    Response Body
    {
         "task_id": 3
     }
    -

    Note:

You can get the asynchronous job status via GET http://localhost:8080/graphs/hugegraph/tasks/${task_id} (the task_id here is 3). See the AsyncJob RESTful API for more details.


    Last modified May 9, 2022: pull-130 fix the reviewed problems (617b0dc)
    +

    Note:

You can get the asynchronous job status via GET http://localhost:8080/graphs/hugegraph/tasks/${task_id} (the task_id here is 3). See the AsyncJob RESTful API for more details.


    Last modified May 9, 2022: pull-130 fix the reviewed problems (617b0dc)
    diff --git a/docs/clients/restful-api/schema/index.html b/docs/clients/restful-api/schema/index.html index 17a4e2fc3..856b79c78 100644 --- a/docs/clients/restful-api/schema/index.html +++ b/docs/clients/restful-api/schema/index.html @@ -308,7 +308,7 @@ } ] } -
    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/task/index.html b/docs/clients/restful-api/task/index.html index 26f691031..06aaf447c 100644 --- a/docs/clients/restful-api/task/index.html +++ b/docs/clients/restful-api/task/index.html @@ -60,7 +60,7 @@
    Response Body
    {
         "cancelled": true
     }
    -

    At this point, the number of vertices whose label is man must be less than 10.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    +

    At this point, the number of vertices whose label is man must be less than 10.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/clients/restful-api/traverser/index.html b/docs/clients/restful-api/traverser/index.html index 1cf0d0611..e8f3fbae8 100644 --- a/docs/clients/restful-api/traverser/index.html +++ b/docs/clients/restful-api/traverser/index.html @@ -1720,7 +1720,7 @@ } ] } -
3.2.23.4 Suitable Scenarios

    Last modified April 17, 2022: rebuild doc (ef36544)
    +
3.2.23.4 Suitable Scenarios

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/variable/index.html b/docs/clients/restful-api/variable/index.html index 1f97f0fd3..68a4a7f81 100644 --- a/docs/clients/restful-api/variable/index.html +++ b/docs/clients/restful-api/variable/index.html @@ -31,7 +31,7 @@ }

5.1.4 Delete a key-value pair

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/variables/name
     
    Response Status
    204
    -

    Last modified April 17, 2022: rebuild doc (ef36544)
    +
    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/clients/restful-api/vertex/index.html b/docs/clients/restful-api/vertex/index.html index 5b448dc54..ea50e2f59 100644 --- a/docs/clients/restful-api/vertex/index.html +++ b/docs/clients/restful-api/vertex/index.html @@ -422,7 +422,7 @@
    Response Status
    204
     

Delete a vertex by Label + Id

When a vertex is deleted by specifying both the label parameter and the id, performance is generally better than deleting by id alone.

    Method & Url
    DELETE http://localhost:8080/graphs/hugegraph/graph/vertices/"1:marko"?label=person
     
    Response Status
    204
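An equivalent command-line form is sketched below (added for illustration, not from the original page). Since "1:marko" is a string id, the surrounding quotes are percent-encoded as %22 so the URL stays valid; this encoding detail is an assumption of the sketch.

# Delete the vertex by Label + Id (a 204 response is expected on success)
curl -i -X DELETE "http://localhost:8080/graphs/hugegraph/graph/vertices/%221:marko%22?label=person"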
    -

    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    +
    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/docs/clients/restful-api/vertexlabel/index.html b/docs/clients/restful-api/vertexlabel/index.html index 18ec4fe04..09693f5e3 100644 --- a/docs/clients/restful-api/vertexlabel/index.html +++ b/docs/clients/restful-api/vertexlabel/index.html @@ -190,7 +190,7 @@
    Response Body
    {
         "task_id": 1
     }
    -

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

Note:

You can query the execution status of the asynchronous task via GET http://localhost:8080/graphs/hugegraph/tasks/1 (where "1" is the task_id); see the asynchronous task RESTful API for more details.


    Last modified April 17, 2022: rebuild doc (ef36544)
diff --git a/docs/config/_print/index.html index 74f038926..290a79586 100644 --- a/docs/config/_print/index.html +++ b/docs/config/_print/index.html @@ -265,7 +265,7 @@ Country code: CN
1. Export the server certificate from the server's private key
    keytool -export -alias serverkey -keystore server.keystore -file server.crt
     

server.crt is the server's certificate

Client

    keytool -import -alias serverkey -file server.crt -keystore client.truststore
    -

client.truststore is for use by the client and stores the trusted certificates
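As an optional sanity check (added here, not part of the original steps), you can list the truststore contents to confirm the server certificate was imported; the store password is whatever you chose when creating client.truststore.

# List the certificates held in the client truststore
keytool -list -v -keystore client.truststore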

    5 - HugeGraph-Computer Config

    Computer Config Options

    config optiondefault valuedescription
    algorithm.message_classorg.apache.hugegraph.computer.core.config.NullThe class of message passed when compute vertex.
    algorithm.params_classorg.apache.hugegraph.computer.core.config.NullThe class used to transfer algorithms’ parameters before algorithm been run.
    algorithm.result_classorg.apache.hugegraph.computer.core.config.NullThe class of vertex’s value, the instance is used to store computation result for the vertex.
    allocator.max_vertices_per_thread10000Maximum number of vertices per thread processed in each memory allocator
    bsp.etcd_endpointshttp://localhost:2379The end points to access etcd.
    bsp.log_interval30000The log interval(in ms) to print the log while waiting bsp event.
    bsp.max_super_step10The max super step of the algorithm.
bsp.register_timeout300000The max timeout to wait for master and workers to register.
    bsp.wait_master_timeout86400000The max timeout(in ms) to wait for master bsp event.
    bsp.wait_workers_timeout86400000The max timeout to wait for workers bsp event.
    hgkv.max_data_block_size65536The max byte size of hgkv-file data block.
    hgkv.max_file_size2147483648The max number of bytes in each hgkv-file.
    hgkv.max_merge_files10The max number of files to merge at one time.
    hgkv.temp_file_dir/tmp/hgkvThis folder is used to store temporary files, temporary files will be generated during the file merging process.
    hugegraph.namehugegraphThe graph name to load data and write results back.
    hugegraph.urlhttp://127.0.0.1:8080The hugegraph url to load data and write results back.
    input.edge_directionOUTThe data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
    input.edge_freqMULTIPLEThe frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
    input.filter_classorg.apache.hugegraph.computer.core.input.filter.DefaultInputFilterThe class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
    input.loader_schema_pathThe schema path of loader input, only takes effect when the input.source_type=loader is enabled
    input.loader_struct_pathThe struct path of loader input, only takes effect when the input.source_type=loader is enabled
    input.max_edges_in_one_vertex200The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
    input.source_typehugegraph-serverThe source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
    input.split_fetch_timeout300The timeout in seconds to fetch input splits
    input.split_max_splits10000000The maximum number of input splits
    input.split_page_size500The page size for streamed load input split data
    input.split_size1048576The input split size in bytes
    job.idlocal_0001The job id on Yarn cluster or K8s cluster.
    job.partitions_count1The partitions count for computing one graph algorithm job.
    job.partitions_thread_nums4The number of threads for partition parallel compute.
    job.workers_count1The workers count for computing one graph algorithm job.
    master.computation_classorg.apache.hugegraph.computer.core.master.DefaultMasterComputationMaster-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
    output.batch_size500The batch size of output
    output.batch_threads1The threads number used to batch output
    output.hdfs_core_site_pathThe hdfs core site path.
    output.hdfs_delimiter,The delimiter of hdfs output.
    output.hdfs_kerberos_enablefalseIs Kerberos authentication enabled for Hdfs.
    output.hdfs_kerberos_keytabThe Hdfs’s key tab file for kerberos authentication.
    output.hdfs_kerberos_principalThe Hdfs’s principal for kerberos authentication.
    output.hdfs_krb5_conf/etc/krb5.confKerberos configuration file.
    output.hdfs_merge_partitionstrueWhether merge output files of multiple partitions.
    output.hdfs_path_prefix/hugegraph-computer/resultsThe directory of hdfs output result.
    output.hdfs_replication3The replication number of hdfs.
    output.hdfs_site_pathThe hdfs site path.
    output.hdfs_urlhdfs://127.0.0.1:9000The hdfs url of output.
    output.hdfs_userhadoopThe hdfs user of output.
    output.output_classorg.apache.hugegraph.computer.core.output.LogOutputThe class to output the computation result of each vertex. Be called after iteration computation.
    output.result_namevalueThe value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
    output.result_write_typeOLAP_COMMONThe result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
    output.retry_interval10The retry interval when output failed
    output.retry_times3The retry times when output failed
    output.single_threads1The threads number used to single output
    output.thread_pool_shutdown_timeout60The timeout seconds of output threads pool shutdown
    output.with_adjacent_edgesfalseOutput the adjacent edges of the vertex or not
    output.with_edge_propertiesfalseOutput the properties of the edge or not
    output.with_vertex_propertiesfalseOutput the properties of the vertex or not
    sort.thread_nums4The number of threads performing internal sorting.
    transport.client_connect_timeout3000The timeout(in ms) of client connect to server.
    transport.client_threads4The number of transport threads for client.
    transport.close_timeout10000The timeout(in ms) of close server or close client.
    transport.finish_session_timeout0The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
    transport.heartbeat_interval20000The minimum interval(in ms) between heartbeats on client side.
    transport.io_modeAUTOThe network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
    transport.max_pending_requests8The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
    transport.max_syn_backlog511The capacity of SYN queue on server side, 0 means using system default value.
    transport.max_timeout_heartbeat_count120The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
    transport.min_ack_interval200The minimum interval(in ms) of server reply ack.
    transport.min_pending_requests6The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
    transport.network_retries3The number of retry attempts for network communication,if network unstable.
    transport.provider_classorg.apache.hugegraph.computer.core.network.netty.NettyTransportProviderThe transport provider, currently only supports Netty.
    transport.receive_buffer_size0The size of socket receive-buffer in bytes, 0 means using system default value.
    transport.recv_file_modetrueWhether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
    transport.send_buffer_size0The size of socket send-buffer in bytes, 0 means using system default value.
    transport.server_host127.0.0.1The server hostname or ip to listen on to transfer data.
    transport.server_idle_timeout360000The max timeout(in ms) of server idle.
    transport.server_port0The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
    transport.server_threads4The number of transport threads for server.
    transport.sync_request_timeout10000The timeout(in ms) to wait response after sending sync-request.
    transport.tcp_keep_alivetrueWhether enable TCP keep-alive.
    transport.transport_epoll_ltfalseWhether enable EPOLL level-trigger.
    transport.write_buffer_high_mark67108864The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
    transport.write_buffer_low_mark33554432The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.org.apache.hugegraph.config.OptionChecker$$Lambda$97/0x00000008001c8440@776a6d9b
    transport.write_socket_timeout3000The timeout(in ms) to write data to socket buffer.
    valuefile.max_segment_size1073741824The max number of bytes in each segment of value-file.
    worker.combiner_classorg.apache.hugegraph.computer.core.config.NullCombiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
    worker.computation_classorg.apache.hugegraph.computer.core.config.NullThe class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
    worker.data_dirs[jobs]The directories separated by ‘,’ that received vertices and messages can persist into.
    worker.edge_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same edge into one properties at inputstep.
    worker.partitionerorg.apache.hugegraph.computer.core.graph.partition.HashPartitionerThe partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
worker.received_buffers_bytes_limit104857600The limit bytes of buffers of received data, the total size of all buffers can’t exceed this limit. If received buffers reach this limit, they will be merged into a file.
    worker.vertex_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same vertex into one properties at inputstep.
    worker.wait_finish_messages_timeout86400000The max timeout(in ms) message-handler wait for finish-message of all workers.
    worker.wait_sort_timeout600000The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers.
    worker.write_buffer_capacity52428800The initial size of write buffer that used to store vertex or message.
    worker.write_buffer_threshold52428800The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message.

    K8s Operator Config Options

NOTE: These options must be supplied as environment variables, converting the option name accordingly, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
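For example, following the conversion rule above, the etcd option could be exported before starting the operator; the second variable name is derived from the same rule for k8s.watch_namespace and is an assumption.

# k8s.internal_etcd_url => INTERNAL_ETCD_URL (conversion rule from the note above)
export INTERNAL_ETCD_URL=http://127.0.0.1:2379
# k8s.watch_namespace => WATCH_NAMESPACE (assumed, same conversion rule)
export WATCH_NAMESPACE=hugegraph-computer-system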

    config optiondefault valuedescription
    k8s.auto_destroy_podtrueWhether to automatically destroy all pods when the job is completed or failed.
    k8s.close_reconciler_timeout120The max timeout(in ms) to close reconciler.
    k8s.internal_etcd_urlhttp://127.0.0.1:2379The internal etcd url for operator system.
    k8s.max_reconcile_retry3The max retry times of reconcile.
    k8s.probe_backlog50The maximum backlog for serving health probes.
    k8s.probe_port9892The value is the port that the controller bind to for serving health probes.
    k8s.ready_check_internal1000The time interval(ms) of check ready.
    k8s.ready_timeout30000The max timeout(in ms) of check ready.
    k8s.reconciler_count10The max number of reconciler thread.
    k8s.resync_period600000The minimum frequency at which watched resources are reconciled.
    k8s.timezoneAsia/ShanghaiThe timezone of computer job and operator.
    k8s.watch_namespacehugegraph-computer-systemThe value is watch custom resources in the namespace, ignore other namespaces, the ‘*’ means is all namespaces will be watched.

    HugeGraph-Computer CRD

    CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

    specdefault valuedescriptionrequired
    algorithmNameThe name of algorithm.true
    jobIdThe job id.true
    imageThe image of algorithm.true
    computerConfThe map of computer config options.true
workerInstancesThe number of worker instances, it will replace the ‘job.workers_count’ option.true
    pullPolicyAlwaysThe pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policyfalse
    pullSecretsThe pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-podfalse
    masterCpuThe cpu limit of master, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    workerCpuThe cpu limit of worker, the unit can be ’m’ or without unit detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    masterMemoryThe memory limit of master, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    workerMemoryThe memory limit of worker, the unit can be one of Ei、Pi、Ti、Gi、Mi、Ki detail please refer to:https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    log4jXmlThe content of log4j.xml for computer job.false
    jarFileThe jar path of computer algorithm.false
    remoteJarUriThe remote jar uri of computer algorithm, it will overlay algorithm image.false
    jvmOptionsThe java startup parameters of computer job.false
    envVarsplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/false
    envFromplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/false
    masterCommandbin/start-computer.shThe run command of master, equivalent to ‘Entrypoint’ field of Docker.false
    masterArgs["-r master", “-d k8s”]The run args of master, equivalent to ‘Cmd’ field of Docker.false
    workerCommandbin/start-computer.shThe run command of worker, equivalent to ‘Entrypoint’ field of Docker.false
    workerArgs["-r worker", “-d k8s”]The run args of worker, equivalent to ‘Cmd’ field of Docker.false
    volumesPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    volumeMountsPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    secretPathsThe map of k8s-secret name and mount path.false
    configMapPathsThe map of k8s-configmap name and mount path.false
    podTemplateSpecPlease refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpecfalse
    securityContextPlease refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/false

    KubeDriver Config Options

    config optiondefault valuedescription
    k8s.build_image_bash_pathThe path of command used to build image.
    k8s.enable_internal_algorithmtrueWhether enable internal algorithm.
    k8s.framework_image_urlhugegraph/hugegraph-computer:latestThe image url of computer framework.
    k8s.image_repository_passwordThe password for login image repository.
    k8s.image_repository_registryThe address for login image repository.
    k8s.image_repository_urlhugegraph/hugegraph-computerThe url of image repository.
    k8s.image_repository_usernameThe username for login image repository.
    k8s.internal_algorithm[pageRank]The name list of all internal algorithm.
    k8s.internal_algorithm_image_urlhugegraph/hugegraph-computer:latestThe image url of internal algorithm.
    k8s.jar_file_dir/cache/jars/The directory where the algorithm jar to upload location.
    k8s.kube_config~/.kube/configThe path of k8s config file.
    k8s.log4j_xml_pathThe log4j.xml path for computer job.
    k8s.namespacehugegraph-computer-systemThe namespace of hugegraph-computer system.
    k8s.pull_secret_names[]The names of pull-secret for pulling image.
    +

    diff --git a/docs/config/config-authentication/index.html b/docs/config/config-authentication/index.html index b2cf8cd48..11dbf380e 100644 --- a/docs/config/config-authentication/index.html +++ b/docs/config/config-authentication/index.html @@ -54,7 +54,7 @@ auth.admin_token=token-value-a auth.user_tokens=[hugegraph1:token-value-1, hugegraph2:token-value-2]

Configure the gremlin.graph option in the configuration file hugegraph{n}.properties:

    gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy
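A minimal sketch of setting this from the shell; the path conf/hugegraph.properties is an assumption about your installation layout.

# Append the auth proxy setting to the graph's properties file; adjust the path to your deployment
echo 'gremlin.graph=com.baidu.hugegraph.auth.HugeFactoryAuthProxy' >> conf/hugegraph.properties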
    -

Custom User Authentication System

If a more flexible user system is needed, you can extend it with a custom authenticator: implement the interface com.baidu.hugegraph.auth.HugeAuthenticator, then change the authenticator option in the configuration file to point to that implementation.


    Last modified April 17, 2022: rebuild doc (ef36544)
    +

Custom User Authentication System

If a more flexible user system is needed, you can extend it with a custom authenticator: implement the interface com.baidu.hugegraph.auth.HugeAuthenticator, then change the authenticator option in the configuration file to point to that implementation.


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/config/config-computer/index.html b/docs/config/config-computer/index.html index b2f026140..fb9a6691a 100644 --- a/docs/config/config-computer/index.html +++ b/docs/config/config-computer/index.html @@ -17,7 +17,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-Computer Config

    Computer Config Options

    config optiondefault valuedescription
    algorithm.message_classorg.apache.hugegraph.computer.core.config.NullThe class of message passed when compute vertex.
    algorithm.params_classorg.apache.hugegraph.computer.core.config.NullThe class used to transfer algorithms’ parameters before algorithm been run.
    algorithm.result_classorg.apache.hugegraph.computer.core.config.NullThe class of vertex’s value, the instance is used to store computation result for the vertex.
    allocator.max_vertices_per_thread10000Maximum number of vertices per thread processed in each memory allocator
    bsp.etcd_endpointshttp://localhost:2379The end points to access etcd.
    bsp.log_interval30000The log interval(in ms) to print the log while waiting bsp event.
    bsp.max_super_step10The max super step of the algorithm.
bsp.register_timeout300000The max timeout to wait for master and workers to register.
    bsp.wait_master_timeout86400000The max timeout(in ms) to wait for master bsp event.
    bsp.wait_workers_timeout86400000The max timeout to wait for workers bsp event.
    hgkv.max_data_block_size65536The max byte size of hgkv-file data block.
    hgkv.max_file_size2147483648The max number of bytes in each hgkv-file.
    hgkv.max_merge_files10The max number of files to merge at one time.
    hgkv.temp_file_dir/tmp/hgkvThis folder is used to store temporary files, temporary files will be generated during the file merging process.
    hugegraph.namehugegraphThe graph name to load data and write results back.
    hugegraph.urlhttp://127.0.0.1:8080The hugegraph url to load data and write results back.
    input.edge_directionOUTThe data of the edge in which direction is loaded, when the value is BOTH, the edges in both OUT and IN direction will be loaded.
    input.edge_freqMULTIPLEThe frequency of edges can exist between a pair of vertices, allowed values: [SINGLE, SINGLE_PER_LABEL, MULTIPLE]. SINGLE means that only one edge can exist between a pair of vertices, use sourceId + targetId to identify it; SINGLE_PER_LABEL means that each edge label can exist one edge between a pair of vertices, use sourceId + edgelabel + targetId to identify it; MULTIPLE means that many edge can exist between a pair of vertices, use sourceId + edgelabel + sortValues + targetId to identify it.
    input.filter_classorg.apache.hugegraph.computer.core.input.filter.DefaultInputFilterThe class to create input-filter object, input-filter is used to Filter vertex edges according to user needs.
    input.loader_schema_pathThe schema path of loader input, only takes effect when the input.source_type=loader is enabled
    input.loader_struct_pathThe struct path of loader input, only takes effect when the input.source_type=loader is enabled
    input.max_edges_in_one_vertex200The maximum number of adjacent edges allowed to be attached to a vertex, the adjacent edges will be stored and transferred together as a batch unit.
    input.source_typehugegraph-serverThe source type to load input data, allowed values: [‘hugegraph-server’, ‘hugegraph-loader’], the ‘hugegraph-loader’ means use hugegraph-loader load data from HDFS or file, if use ‘hugegraph-loader’ load data then please config ‘input.loader_struct_path’ and ‘input.loader_schema_path’.
    input.split_fetch_timeout300The timeout in seconds to fetch input splits
    input.split_max_splits10000000The maximum number of input splits
    input.split_page_size500The page size for streamed load input split data
    input.split_size1048576The input split size in bytes
    job.idlocal_0001The job id on Yarn cluster or K8s cluster.
    job.partitions_count1The partitions count for computing one graph algorithm job.
    job.partitions_thread_nums4The number of threads for partition parallel compute.
    job.workers_count1The workers count for computing one graph algorithm job.
    master.computation_classorg.apache.hugegraph.computer.core.master.DefaultMasterComputationMaster-computation is computation that can determine whether to continue next superstep. It runs at the end of each superstep on master.
    output.batch_size500The batch size of output
    output.batch_threads1The threads number used to batch output
    output.hdfs_core_site_pathThe hdfs core site path.
    output.hdfs_delimiter,The delimiter of hdfs output.
    output.hdfs_kerberos_enablefalseIs Kerberos authentication enabled for Hdfs.
    output.hdfs_kerberos_keytabThe Hdfs’s key tab file for kerberos authentication.
    output.hdfs_kerberos_principalThe Hdfs’s principal for kerberos authentication.
    output.hdfs_krb5_conf/etc/krb5.confKerberos configuration file.
    output.hdfs_merge_partitionstrueWhether merge output files of multiple partitions.
    output.hdfs_path_prefix/hugegraph-computer/resultsThe directory of hdfs output result.
    output.hdfs_replication3The replication number of hdfs.
    output.hdfs_site_pathThe hdfs site path.
    output.hdfs_urlhdfs://127.0.0.1:9000The hdfs url of output.
    output.hdfs_userhadoopThe hdfs user of output.
    output.output_classorg.apache.hugegraph.computer.core.output.LogOutputThe class to output the computation result of each vertex. Be called after iteration computation.
    output.result_namevalueThe value is assigned dynamically by #name() of instance created by WORKER_COMPUTATION_CLASS.
    output.result_write_typeOLAP_COMMONThe result write-type to output to hugegraph, allowed values are: [OLAP_COMMON, OLAP_SECONDARY, OLAP_RANGE].
    output.retry_interval10The retry interval when output failed
    output.retry_times3The retry times when output failed
    output.single_threads1The threads number used to single output
    output.thread_pool_shutdown_timeout60The timeout seconds of output threads pool shutdown
    output.with_adjacent_edgesfalseOutput the adjacent edges of the vertex or not
    output.with_edge_propertiesfalseOutput the properties of the edge or not
    output.with_vertex_propertiesfalseOutput the properties of the vertex or not
    sort.thread_nums4The number of threads performing internal sorting.
    transport.client_connect_timeout3000The timeout(in ms) of client connect to server.
    transport.client_threads4The number of transport threads for client.
    transport.close_timeout10000The timeout(in ms) of close server or close client.
    transport.finish_session_timeout0The timeout(in ms) to finish session, 0 means using (transport.sync_request_timeout * transport.max_pending_requests).
    transport.heartbeat_interval20000The minimum interval(in ms) between heartbeats on client side.
    transport.io_modeAUTOThe network IO Mode, either ‘NIO’, ‘EPOLL’, ‘AUTO’, the ‘AUTO’ means selecting the property mode automatically.
    transport.max_pending_requests8The max number of client unreceived ack, it will trigger the sending unavailable if the number of unreceived ack >= max_pending_requests.
    transport.max_syn_backlog511The capacity of SYN queue on server side, 0 means using system default value.
    transport.max_timeout_heartbeat_count120The maximum times of timeout heartbeat on client side, if the number of timeouts waiting for heartbeat response continuously > max_heartbeat_timeouts the channel will be closed from client side.
    transport.min_ack_interval200The minimum interval(in ms) of server reply ack.
    transport.min_pending_requests6The minimum number of client unreceived ack, it will trigger the sending available if the number of unreceived ack < min_pending_requests.
    transport.network_retries3The number of retry attempts for network communication if the network is unstable.
    transport.provider_classorg.apache.hugegraph.computer.core.network.netty.NettyTransportProviderThe transport provider, currently only supports Netty.
    transport.receive_buffer_size0The size of socket receive-buffer in bytes, 0 means using system default value.
    transport.recv_file_modetrueWhether enable receive buffer-file mode, it will receive buffer write file from socket by zero-copy if enable.
    transport.send_buffer_size0The size of socket send-buffer in bytes, 0 means using system default value.
    transport.server_host127.0.0.1The server hostname or ip to listen on to transfer data.
    transport.server_idle_timeout360000The max timeout(in ms) of server idle.
    transport.server_port0The server port to listen on to transfer data. The system will assign a random port if it’s set to 0.
    transport.server_threads4The number of transport threads for server.
    transport.sync_request_timeout10000The timeout(in ms) to wait response after sending sync-request.
    transport.tcp_keep_alivetrueWhether enable TCP keep-alive.
    transport.transport_epoll_ltfalseWhether enable EPOLL level-trigger.
    transport.write_buffer_high_mark67108864The high water mark for write buffer in bytes, it will trigger the sending unavailable if the number of queued bytes > write_buffer_high_mark.
    transport.write_buffer_low_mark33554432The low water mark for write buffer in bytes, it will trigger the sending available if the number of queued bytes < write_buffer_low_mark.
    transport.write_socket_timeout3000The timeout(in ms) to write data to socket buffer.
    valuefile.max_segment_size1073741824The max number of bytes in each segment of value-file.
    worker.combiner_classorg.apache.hugegraph.computer.core.config.NullCombiner can combine messages into one value for a vertex, for example page-rank algorithm can combine messages of a vertex to a sum value.
    worker.computation_classorg.apache.hugegraph.computer.core.config.NullThe class to create worker-computation object, worker-computation is used to compute each vertex in each superstep.
    worker.data_dirs[jobs]The directories separated by ‘,’ that received vertices and messages can persist into.
    worker.edge_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same edge into one set of properties at the input step.
    worker.partitionerorg.apache.hugegraph.computer.core.graph.partition.HashPartitionerThe partitioner that decides which partition a vertex should be in, and which worker a partition should be in.
    worker.received_buffers_bytes_limit104857600The limit bytes of buffers of received data, the total size of all buffers can’t exceed this limit. If received buffers reach this limit, they will be merged into a file.
    worker.vertex_properties_combiner_classorg.apache.hugegraph.computer.core.combiner.OverwritePropertiesCombinerThe combiner can combine several properties of the same vertex into one set of properties at the input step.
    worker.wait_finish_messages_timeout86400000The max timeout(in ms) message-handler wait for finish-message of all workers.
    worker.wait_sort_timeout600000The max timeout(in ms) message-handler wait for sort-thread to sort one batch of buffers.
    worker.write_buffer_capacity52428800The initial size of write buffer that used to store vertex or message.
    worker.write_buffer_threshold52428800The threshold of write buffer, exceeding it will trigger sorting, the write buffer is used to store vertex or message.

    K8s Operator Config Options

    NOTE: These options need to be supplied as environment variables, e.g. k8s.internal_etcd_url => INTERNAL_ETCD_URL
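    For example, if the operator runs as an ordinary Kubernetes Deployment (the manifest layout below is an assumption, not taken from this page), these options would be passed in the container’s env section, upper-cased and with the ‘k8s.’ prefix dropped:

    # Hypothetical env section of the operator deployment (values are the defaults listed below)
    env:
      - name: INTERNAL_ETCD_URL            # from k8s.internal_etcd_url
        value: "http://127.0.0.1:2379"
      - name: WATCH_NAMESPACE              # assumed to follow the same pattern, from k8s.watch_namespace
        value: "hugegraph-computer-system"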

    config optiondefault valuedescription
    k8s.auto_destroy_podtrueWhether to automatically destroy all pods when the job is completed or failed.
    k8s.close_reconciler_timeout120The max timeout(in ms) to close reconciler.
    k8s.internal_etcd_urlhttp://127.0.0.1:2379The internal etcd url for operator system.
    k8s.max_reconcile_retry3The max retry times of reconcile.
    k8s.probe_backlog50The maximum backlog for serving health probes.
    k8s.probe_port9892The value is the port that the controller bind to for serving health probes.
    k8s.ready_check_internal1000The time interval (ms) between readiness checks.
    k8s.ready_timeout30000The max timeout (in ms) of the readiness check.
    k8s.reconciler_count10The max number of reconciler threads.
    k8s.resync_period600000The minimum frequency at which watched resources are reconciled.
    k8s.timezoneAsia/ShanghaiThe timezone of computer job and operator.
    k8s.watch_namespacehugegraph-computer-systemWatch custom resources only in this namespace and ignore other namespaces; ‘*’ means all namespaces will be watched.

    HugeGraph-Computer CRD

    CRD: https://github.com/apache/hugegraph-computer/blob/master/computer-k8s-operator/manifest/hugegraph-computer-crd.v1.yaml

    specdefault valuedescriptionrequired
    algorithmNameThe name of algorithm.true
    jobIdThe job id.true
    imageThe image of algorithm.true
    computerConfThe map of computer config options.true
    workerInstancesThe number of worker instances; it overrides the ‘job.workers_count’ option.true
    pullPolicyAlwaysThe pull-policy of image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policyfalse
    pullSecretsThe pull-secrets of Image, detail please refer to: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-podfalse
    masterCpuThe cpu limit of master, the unit can be ’m’ or unitless; for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    workerCpuThe cpu limit of worker, the unit can be ’m’ or unitless; for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpufalse
    masterMemoryThe memory limit of master, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    workerMemoryThe memory limit of worker, the unit can be one of Ei, Pi, Ti, Gi, Mi, Ki; for details please refer to: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-memoryfalse
    log4jXmlThe content of log4j.xml for computer job.false
    jarFileThe jar path of computer algorithm.false
    remoteJarUriThe remote jar uri of computer algorithm, it will overlay algorithm image.false
    jvmOptionsThe java startup parameters of computer job.false
    envVarsplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/false
    envFromplease refer to: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/false
    masterCommandbin/start-computer.shThe run command of master, equivalent to ‘Entrypoint’ field of Docker.false
    masterArgs["-r master", “-d k8s”]The run args of master, equivalent to ‘Cmd’ field of Docker.false
    workerCommandbin/start-computer.shThe run command of worker, equivalent to ‘Entrypoint’ field of Docker.false
    workerArgs["-r worker", “-d k8s”]The run args of worker, equivalent to ‘Cmd’ field of Docker.false
    volumesPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    volumeMountsPlease refer to: https://kubernetes.io/docs/concepts/storage/volumes/false
    secretPathsThe map of k8s-secret name and mount path.false
    configMapPathsThe map of k8s-configmap name and mount path.false
    podTemplateSpecPlease refer to: https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-template-v1/#PodTemplateSpecfalse
    securityContextPlease refer to: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/false
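    Putting the required fields together, a minimal custom resource might be sketched as below. The apiVersion and kind are assumptions here (verify them against the CRD manifest linked above); only the fields marked required in the table are strictly needed.

    # Hypothetical HugeGraph-Computer job CR; field names come from the spec table above.
    apiVersion: hugegraph.apache.org/v1      # assumed group/version, verify against the CRD manifest
    kind: HugeGraphComputerJob               # assumed kind, verify against the CRD manifest
    metadata:
      name: pagerank-sample                  # hypothetical name
      namespace: hugegraph-computer-system
    spec:
      algorithmName: page-rank
      jobId: pagerank-0001
      image: hugegraph/pagerank:latest       # hypothetical algorithm image
      workerInstances: 3
      computerConf:
        job.partitions_count: "3"
        hugegraph.url: "http://127.0.0.1:8080"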

    KubeDriver Config Options

    config optiondefault valuedescription
    k8s.build_image_bash_pathThe path of command used to build image.
    k8s.enable_internal_algorithmtrueWhether enable internal algorithm.
    k8s.framework_image_urlhugegraph/hugegraph-computer:latestThe image url of computer framework.
    k8s.image_repository_passwordThe password for login image repository.
    k8s.image_repository_registryThe address for login image repository.
    k8s.image_repository_urlhugegraph/hugegraph-computerThe url of image repository.
    k8s.image_repository_usernameThe username for login image repository.
    k8s.internal_algorithm[pageRank]The name list of all internal algorithm.
    k8s.internal_algorithm_image_urlhugegraph/hugegraph-computer:latestThe image url of internal algorithm.
    k8s.jar_file_dir/cache/jars/The directory where the algorithm jar is uploaded.
    k8s.kube_config~/.kube/configThe path of k8s config file.
    k8s.log4j_xml_pathThe log4j.xml path for computer job.
    k8s.namespacehugegraph-computer-systemThe namespace of hugegraph-computer system.
    k8s.pull_secret_names[]The names of pull-secret for pulling image.
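    As a sketch of how these driver options fit together (the file name and location are assumptions; supply them wherever your driver reads its configuration), a setup using the listed defaults might look like:

    # Hypothetical KubeDriver config snippet (file name/location is an assumption):
    k8s.kube_config=~/.kube/config
    k8s.namespace=hugegraph-computer-system
    k8s.framework_image_url=hugegraph/hugegraph-computer:latest
    k8s.image_repository_url=hugegraph/hugegraph-computer
    k8s.enable_internal_algorithm=true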

    Last modified November 28, 2022: improve computer doc (#157) (862b048)
    diff --git a/docs/config/config-guide/index.html b/docs/config/config-guide/index.html index 5a5c6643f..496a5747e 100644 --- a/docs/config/config-guide/index.html +++ b/docs/config/config-guide/index.html @@ -223,7 +223,7 @@

    Stop the Server, run init-store.sh to initialize the store (create the database for the new graph), then restart the Server

    $ bin/stop-hugegraph.sh
     $ bin/init-store.sh
     $ bin/start-hugegraph.sh
    -

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/config/config-https/index.html b/docs/config/config-https/index.html index 39c43cb2a..d3384738d 100644 --- a/docs/config/config-https/index.html +++ b/docs/config/config-https/index.html @@ -59,7 +59,7 @@ Country code: CN
    1. Export the server certificate based on the server's private key
    keytool -export -alias serverkey -keystore server.keystore -file server.crt
     

    server.crt is the server's certificate

    Client

    keytool -import -alias serverkey -file server.crt -keystore client.truststore
    -

    client.truststore is for the client; it stores the trusted certificates
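    On the server side, the generated server.keystore is then referenced from rest-server.properties. A minimal sketch follows (the https address and port are assumptions; the ssl.* option names appear in the Rest Server config table later in this document):

    # Hypothetical HTTPS fragment for rest-server.properties:
    restserver.url=https://127.0.0.1:8443     # assumed https address/port
    ssl.keystore_file=server.keystore
    ssl.keystore_password=******              # the keystore password chosen when generating server.keystore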


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/config/config-option/index.html b/docs/config/config-option/index.html index 2e672f6fc..d965639b7 100644 --- a/docs/config/config-option/index.html +++ b/docs/config/config-option/index.html @@ -21,7 +21,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Config Options

    Gremlin Server Config Options

    Corresponding configuration file gremlin-server.yaml

    config optiondefault valuedescription
    host127.0.0.1The host or ip of Gremlin Server.
    port8182The listening port of Gremlin Server.
    graphshugegraph: conf/hugegraph.propertiesThe map of graphs with name and config file path.
    scriptEvaluationTimeout30000The timeout for gremlin script execution (milliseconds).
    channelizerorg.apache.tinkerpop.gremlin.server.channel.HttpChannelizerIndicates the protocol over which the Gremlin Server provides its service.
    authenticationauthenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties}The authenticator and config(contains tokens path) of authentication mechanism.
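    Putting a few of these together, an illustrative gremlin-server.yaml fragment (values are the defaults listed above, not recommendations) could look like:

    # Illustrative gremlin-server.yaml fragment using the options above:
    host: 127.0.0.1
    port: 8182
    scriptEvaluationTimeout: 30000
    channelizer: org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer
    graphs:
      hugegraph: conf/hugegraph.properties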

    Rest Server & API Config Options

    Corresponding configuration file rest-server.properties

    config optiondefault valuedescription
    graphs[hugegraph:conf/hugegraph.properties]The map of graphs’ name and config file.
    server.idserver-1The id of rest server, used for license verification.
    server.rolemasterThe role of nodes in the cluster, available types are [master, worker, computer]
    restserver.urlhttp://127.0.0.1:8080The url for listening of rest server.
    ssl.keystore_fileserver.keystoreThe path of server keystore file used when https protocol is enabled.
    ssl.keystore_passwordThe password of the server keystore file, used when the https protocol is enabled.
    restserver.max_worker_threads2 * CPUsThe maximum worker threads of rest server.
    restserver.min_free_memory64The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
    restserver.request_timeout30The time in seconds within which a request must complete, -1 means no timeout.
    restserver.connection_idle_timeout30The time in seconds to keep an inactive connection alive, -1 means no timeout.
    restserver.connection_max_requests256The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
    gremlinserver.urlhttp://127.0.0.1:8182The url of gremlin server.
    gremlinserver.max_route8The max route number for gremlin server.
    gremlinserver.timeout30The timeout in seconds of waiting for gremlin server.
    batch.max_edges_per_batch500The maximum number of edges submitted per batch.
    batch.max_vertices_per_batch500The maximum number of vertices submitted per batch.
    batch.max_write_ratio50The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
    batch.max_write_threads0The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
    auth.authenticatorThe class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.admin_token162f7848-0b6d-4faf-b557-3a0797869c55Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.graph_storehugegraphThe name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
    auth.user_tokens[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.audit_log_rate1000.0The max rate of audit log output per user, default value is 1000 records per second.
    auth.cache_capacity10240The max cache capacity of each auth cache item.
    auth.cache_expire600The expiration time in seconds of the auth cache.
    auth.remote_urlIf the address is empty, it provides the auth service itself; otherwise it acts as an auth client and also provides the auth service through rpc forwarding. The remote url can be set to multiple addresses, concatenated by ‘,’.
    auth.token_expire86400The expiration time in seconds after the token is created.
    auth.token_secretFXQXbJtbCLxODc6tGci732pkH1cyf8QgSecret key of HS256 algorithm.
    exception.allow_tracefalseWhether to allow exception trace stack.
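    For reference, an illustrative rest-server.properties fragment built only from the options above might look like this (values are the listed defaults):

    # Illustrative rest-server.properties fragment:
    restserver.url=http://127.0.0.1:8080
    gremlinserver.url=http://127.0.0.1:8182
    graphs=[hugegraph:conf/hugegraph.properties]
    batch.max_write_threads=0     # 0 means derive the thread count from batch.max_write_ratio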

    Basic Config Options

    Basic Config Options and Backend Config Options correspond to the configuration file {graph-name}.properties, such as hugegraph.properties
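    For example, a minimal {graph-name}.properties for the default RocksDB backend could be sketched as follows (all values are defaults taken from the tables in this section; illustrative only):

    # Illustrative hugegraph.properties fragment:
    gremlin.graph=com.baidu.hugegraph.HugeFactory
    backend=rocksdb
    serializer=binary
    store=hugegraph
    rocksdb.data_path=rocksdb-data
    rocksdb.wal_path=rocksdb-data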

    config optiondefault valuedescription
    gremlin.graphcom.baidu.hugegraph.HugeFactoryGremlin entrance to create graph.
    backendrocksdbThe data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
    serializerbinaryThe serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
    storehugegraphThe database name like Cassandra Keyspace.
    store.connection_detect_interval600The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
    store.graphgThe graph table name, which stores vertices, edges and properties.
    store.schemamThe schema table name, which stores metadata.
    store.systemsThe system table name, which stores system data.
    schema.illegal_name_regex.\s+$|~.The regex specified the illegal format for schema name.
    schema.cache_capacity10000The max cache size(items) of schema cache.
    vertex.cache_typel2The type of vertex cache, allowed values are [l1, l2].
    vertex.cache_capacity10000000The max cache size(items) of vertex cache.
    vertex.cache_expire600The expire time in seconds of vertex cache.
    vertex.check_customized_id_existfalseWhether to check the vertices exist for those using customized id strategy.
    vertex.default_labelvertexThe default vertex label.
    vertex.tx_capacity10000The max size(items) of vertices(uncommitted) in transaction.
    vertex.check_adjacent_vertex_existfalseWhether to check the adjacent vertices of edges exist.
    vertex.lazy_load_adjacent_vertextrueWhether to lazy load adjacent vertices of edges.
    vertex.part_edge_commit_size5000Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
    vertex.encode_primary_key_numbertrueWhether to encode number value of primary key in vertex id.
    vertex.remove_left_index_at_overwritefalseWhether remove left index at overwrite.
    edge.cache_typel2The type of edge cache, allowed values are [l1, l2].
    edge.cache_capacity1000000The max cache size(items) of edge cache.
    edge.cache_expire600The expiration time in seconds of edge cache.
    edge.tx_capacity10000The max size(items) of edges(uncommitted) in transaction.
    query.page_size500The size of each page when querying by paging.
    query.batch_size1000The size of each batch when querying by batch.
    query.ignore_invalid_datatrueWhether to ignore invalid data of vertex or edge.
    query.index_intersect_threshold1000The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
    query.ramtable_edges_capacity20000000The maximum number of edges in ramtable, include OUT and IN edges.
    query.ramtable_enablefalseWhether to enable ramtable for query of adjacent edges.
    query.ramtable_vertices_capacity10000000The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
    query.optimize_aggregate_by_indexfalseWhether to optimize aggregate query(like count) by index.
    oltp.concurrent_depth10The min depth to enable concurrent oltp algorithm.
    oltp.concurrent_threads10Thread number to concurrently execute oltp algorithm.
    oltp.collection_typeECThe implementation type of collections used in oltp algorithm.
    rate_limit.read0The max rate(times/s) to execute query of vertices/edges.
    rate_limit.write0The max rate(items/s) to add/update/delete vertices/edges.
    task.wait_timeout10Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend.
    task.input_size_limit16777216The job input size limit in bytes.
    task.result_size_limit16777216The job result size limit in bytes.
    task.sync_deletionfalseWhether to delete schema or expired data synchronously.
    task.ttl_delete_batch1The batch size used to delete expired data.
    computer.config/conf/computer.yamlThe config file path of computer job.
    search.text_analyzerikanalyzerChoose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
    search.text_analyzer_modesmartSpecify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
    snowflake.datacenter_id0The datacenter id of snowflake id generator.
    snowflake.force_stringfalseWhether to force the snowflake long id to be a string.
    snowflake.worker_id0The worker id of snowflake id generator.
    raft.modefalseWhether the backend storage works in raft mode.
    raft.safe_readfalseWhether to use linearly consistent read.
    raft.use_snapshotfalseWhether to use snapshot.
    raft.endpoint127.0.0.1:8281The peerid of current raft node.
    raft.group_peers127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283The peers of current raft group.
    raft.path./raft-logThe log path of current raft node.
    raft.use_replicator_pipelinetrueWhether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel, and the next log doesn’t have to wait for the ack message of the current log.
    raft.election_timeout10000Timeout in milliseconds to launch a round of election.
    raft.snapshot_interval3600The interval in seconds to trigger snapshot save.
    raft.backend_threadscurrent CPU v-coresThe thread number used to apply task to backend.
    raft.read_index_threads8The thread number used to execute reading index.
    raft.apply_batch1The apply batch size to trigger disruptor event handler.
    raft.queue_size16384The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
    raft.queue_publish_timeout60The timeout in second when publish event into disruptor.
    raft.rpc_threads80The rpc threads for jraft RPC layer.
    raft.rpc_connect_timeout5000The rpc connect timeout for jraft rpc.
    raft.rpc_timeout60000The rpc timeout for jraft rpc.
    raft.rpc_buf_low_water_mark10485760The ChannelOutboundBuffer’s low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
    raft.rpc_buf_high_water_mark20971520The ChannelOutboundBuffer’s high water mark of netty; only when the buffer size exceeds this size will the method ChannelOutboundBuffer.isWritable() return false, which means the downstream pressure is too great to process requests or the network is very congested, and the upstream needs to limit its rate at this time.
    raft.read_strategyReadOnlyLeaseBasedThe linearizability of read strategy.

    RPC server Config Options

    config optiondefault valuedescription
    rpc.client_connect_timeout20The timeout(in seconds) of rpc client connect to rpc server.
    rpc.client_load_balancerconsistentHashThe rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is ‘consistentHash’, means forwarding by request parameters.
    rpc.client_read_timeout40The timeout(in seconds) of rpc client read from rpc server.
    rpc.client_reconnect_period10The period(in seconds) of rpc client reconnect to rpc server.
    rpc.client_retries3Failed retry number of rpc client calls to rpc server.
    rpc.config_order999Sofa rpc configuration file loading order; the larger the value, the later it is loaded.
    rpc.logger_implcom.alipay.sofa.rpc.log.SLF4JLoggerImplSofa rpc log implementation class.
    rpc.protocolboltRpc communication protocol, client and server need to be specified the same value.
    rpc.remote_urlThe remote urls of rpc peers, it can be set to multiple addresses, which are concat by ‘,’, empty value means not enabled.
    rpc.server_adaptive_portfalseWhether the bound port is adaptive, if it’s enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
    rpc.server_hostThe hosts/ips bound by rpc server to provide services, empty value means not enabled.
    rpc.server_port8090The port bound by rpc server to provide services.
    rpc.server_timeout30The timeout(in seconds) of rpc server execution.

    Cassandra Backend Config Options

    config optiondefault valuedescription
    backendMust be set to cassandra.
    serializerMust be set to cassandra.
    cassandra.hostlocalhostThe seeds hostname or ip address of cassandra cluster.
    cassandra.port9042The seeds port address of cassandra cluster.
    cassandra.connect_timeout5The cassandra driver connect server timeout(seconds).
    cassandra.read_timeout20The cassandra driver read from server timeout(seconds).
    cassandra.keyspace.strategySimpleStrategyThe replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
    cassandra.keyspace.replication[3]The keyspace replication factor of SimpleStrategy, like ‘[3]’, or replicas in each datacenter of NetworkTopologyStrategy, like ‘[dc1:2,dc2:1]’.
    cassandra.usernameThe username to use to login to cassandra cluster.
    cassandra.passwordThe password corresponding to cassandra.username.
    cassandra.compression_typenoneThe compression algorithm of cassandra transport: none/snappy/lz4.
    cassandra.jmx_port7199The port of JMX API service for cassandra.
    cassandra.aggregation_timeout43200The timeout in seconds of waiting for aggregation.
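    As a sketch, switching a graph to the Cassandra backend combines the options above in {graph-name}.properties (illustrative, using the listed defaults):

    # Illustrative Cassandra backend fragment for {graph-name}.properties:
    backend=cassandra
    serializer=cassandra
    cassandra.host=localhost
    cassandra.port=9042
    cassandra.keyspace.strategy=SimpleStrategy
    cassandra.keyspace.replication=[3]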

    ScyllaDB Backend Config Options

    config optiondefault valuedescription
    backendMust be set to scylladb.
    serializerMust be set to scylladb.

    Other options are consistent with the Cassandra backend.

    RocksDB Backend Config Options

    config optiondefault valuedescription
    backendMust be set to rocksdb.
    serializerMust be set to binary.
    rocksdb.data_disks[]The optimized disks for storing data of RocksDB. The format of each element: STORE/TABLE: /path/disk.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]
    rocksdb.data_pathrocksdb-dataThe path for storing data of RocksDB.
    rocksdb.wal_pathrocksdb-dataThe path for storing WAL of RocksDB.
    rocksdb.allow_mmap_readsfalseAllow the OS to mmap file for reading sst tables.
    rocksdb.allow_mmap_writesfalseAllow the OS to mmap file for writing.
    rocksdb.block_cache_capacity8388608The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
    rocksdb.bloom_filter_bits_per_key-1The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
    rocksdb.bloom_filter_block_based_modefalseUse block based filter rather than full filter.
    rocksdb.bloom_filter_whole_key_filteringtrueTrue if place whole keys in the bloom filter, else place the prefix of keys.
    rocksdb.bottommost_compressionNO_COMPRESSIONThe compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.bulkload_modefalseSwitch to the mode to bulk load data into RocksDB.
    rocksdb.cache_index_and_filter_blocksfalseIndicating if we’d put index/filter blocks to the block cache.
    rocksdb.compaction_styleLEVELSet compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
    rocksdb.compressionSNAPPY_COMPRESSIONThe compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.compression_per_level[NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION]The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
    rocksdb.delayed_write_rate16777216The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
    rocksdb.log_levelINFOThe info log level of RocksDB.
    rocksdb.max_background_jobs8Maximum number of concurrent background jobs, including flushes and compactions.
    rocksdb.level_compaction_dynamic_level_bytesfalseWhether to enable level_compaction_dynamic_level_bytes, if it’s enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it’s not recommended.
    rocksdb.max_bytes_for_level_base536870912The upper-bound of the total size of level-1 files in bytes.
    rocksdb.max_bytes_for_level_multiplier10.0The ratio between the total size of level (L+1) files and the total size of level L files for all L.
    rocksdb.max_open_files-1The maximum number of open files that can be cached by RocksDB, -1 means no limit.
    rocksdb.max_subcompactions4The value represents the maximum number of threads per compaction job.
    rocksdb.max_write_buffer_number6The maximum number of write buffers that are built up in memory.
    rocksdb.max_write_buffer_number_to_maintain0The total maximum number of write buffers to maintain in memory.
    rocksdb.min_write_buffer_number_to_merge2The minimum number of write buffers that will be merged together.
    rocksdb.num_levels7Set the number of levels for this database.
    rocksdb.optimize_filters_for_hitsfalseThis flag allows us to not store filters for the last level.
    rocksdb.optimize_modetrueOptimize for heavy workloads and big datasets.
    rocksdb.pin_l0_filter_and_index_blocks_in_cachefalseIndicating if we’d pin level-0 index/filter blocks in the block cache.
    rocksdb.sst_pathThe path for ingesting SST file into RocksDB.
    rocksdb.target_file_size_base67108864The target file size for compaction in bytes.
    rocksdb.target_file_size_multiplier1The size ratio between a level L file and a level (L+1) file.
    rocksdb.use_direct_io_for_flush_and_compactionfalseEnable the OS to use direct read/writes in flush and compaction.
    rocksdb.use_direct_readsfalseEnable the OS to use direct I/O for reading sst tables.
    rocksdb.write_buffer_size134217728Amount of data in bytes to build up in memory.
    rocksdb.max_manifest_file_size104857600The max size of manifest file in bytes.
    rocksdb.skip_stats_update_on_db_openfalseWhether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
    rocksdb.max_file_opening_threads16The max number of threads used to open files.
    rocksdb.max_total_wal_size0Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
    rocksdb.db_write_buffer_size0Total size of write buffers in bytes across all column families, 0 means no limit.
    rocksdb.delete_obsolete_files_period21600The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
    rocksdb.hard_pending_compaction_bytes_limit274877906944The hard limit to impose on pending compaction in bytes.
    rocksdb.level0_file_num_compaction_trigger2Number of files to trigger level-0 compaction.
    rocksdb.level0_slowdown_writes_trigger20Soft limit on number of level-0 files for slowing down writes.
    rocksdb.level0_stop_writes_trigger36Hard limit on number of level-0 files for stopping writes.
    rocksdb.soft_pending_compaction_bytes_limit68719476736The soft limit to impose on pending compaction in bytes.
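    For instance, the rocksdb.data_disks option described above can place hot tables on separate disks; the list syntax below is inferred from the empty-list default and the STORE/TABLE: /path format, and the paths are hypothetical:

    # Illustrative RocksDB disk layout for {graph-name}.properties (paths are hypothetical):
    rocksdb.data_path=/ssd1/rocksdb-data
    rocksdb.wal_path=/ssd1/rocksdb-data
    rocksdb.data_disks=[g/vertex:/ssd2/rocksdb-vertex, g/edge_out:/ssd3/rocksdb-edge]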

    HBase Backend Config Options

    config optiondefault valuedescription
    backendMust be set to hbase.
    serializerMust be set to hbase.
    hbase.hostslocalhostThe hostnames or ip addresses of HBase zookeeper, separated with commas.
    hbase.port2181The port address of HBase zookeeper.
    hbase.threads_max64The max threads num of hbase connections.
    hbase.znode_parent/hbaseThe znode parent path of HBase zookeeper.
    hbase.zk_retry3The recovery retry times of HBase zookeeper.
    hbase.aggregation_timeout43200The timeout in seconds of waiting for aggregation.
    hbase.kerberos_enablefalseIs Kerberos authentication enabled for HBase.
    hbase.kerberos_keytabThe HBase’s key tab file for kerberos authentication.
    hbase.kerberos_principalThe HBase’s principal for kerberos authentication.
    hbase.krb5_confetc/krb5.confKerberos configuration file, including KDC IP, default realm, etc.
    hbase.hbase_site/etc/hbase/conf/hbase-site.xmlThe HBase’s configuration file
    hbase.enable_partitiontrueIs pre-split partitions enabled for HBase.
    hbase.vertex_partitions10The number of partitions of the HBase vertex table.
    hbase.edge_partitions30The number of partitions of the HBase edge table.

    MySQL & PostgreSQL Backend Config Options

    config optiondefault valuedescription
    backendMust be set to mysql.
    serializerMust be set to mysql.
    jdbc.drivercom.mysql.jdbc.DriverThe JDBC driver class to connect database.
    jdbc.urljdbc:mysql://127.0.0.1:3306The url of database in JDBC format.
    jdbc.usernamerootThe username to login database.
    jdbc.password******The password corresponding to jdbc.username.
    jdbc.ssl_modefalseThe SSL mode of connections with database.
    jdbc.reconnect_interval3The interval(seconds) between reconnections when the database connection fails.
    jdbc.reconnect_max_times3The reconnect times when the database connection fails.
    jdbc.storage_engineInnoDBThe storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
    jdbc.postgresql.connect_databasetemplate1The database used to connect when init store, drop store or check store exist.

    PostgreSQL Backend Config Options

    config optiondefault valuedescription
    backendMust be set to postgresql.
    serializerMust be set to postgresql.

    Other options are consistent with the MySQL backend.

    The driver and url of the PostgreSQL backend should be set to:

    • jdbc.driver=org.postgresql.Driver
    • jdbc.url=jdbc:postgresql://localhost:5432/
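    Combined with the backend and serializer settings above, a PostgreSQL configuration might be sketched as (credentials are hypothetical):

    # Illustrative PostgreSQL backend fragment for {graph-name}.properties:
    backend=postgresql
    serializer=postgresql
    jdbc.driver=org.postgresql.Driver
    jdbc.url=jdbc:postgresql://localhost:5432/
    jdbc.username=postgres     # hypothetical credentials
    jdbc.password=******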

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    HugeGraph Config Options

    Gremlin Server Config Options

    Corresponding configuration file gremlin-server.yaml

    config optiondefault valuedescription
    host127.0.0.1The host or ip of Gremlin Server.
    port8182The listening port of Gremlin Server.
    graphshugegraph: conf/hugegraph.propertiesThe map of graphs with name and config file path.
    scriptEvaluationTimeout30000The timeout for gremlin script execution(millisecond).
    channelizerorg.apache.tinkerpop.gremlin.server.channel.HttpChannelizerIndicates the protocol which the Gremlin Server provides service.
    authenticationauthenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties}The authenticator and config(contains tokens path) of authentication mechanism.

    Rest Server & API Config Options

    Corresponding configuration file rest-server.properties

    config optiondefault valuedescription
    graphs[hugegraph:conf/hugegraph.properties]The map of graphs’ name and config file.
    server.idserver-1The id of rest server, used for license verification.
    server.rolemasterThe role of nodes in the cluster, available types are [master, worker, computer]
    restserver.urlhttp://127.0.0.1:8080The url for listening of rest server.
    ssl.keystore_fileserver.keystoreThe path of server keystore file used when https protocol is enabled.
    ssl.keystore_passwordThe password of the path of the server keystore file used when the https protocol is enabled.
    restserver.max_worker_threads2 * CPUsThe maximum worker threads of rest server.
    restserver.min_free_memory64The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
    restserver.request_timeout30The time in seconds within which a request must complete, -1 means no timeout.
    restserver.connection_idle_timeout30The time in seconds to keep an inactive connection alive, -1 means no timeout.
    restserver.connection_max_requests256The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
    gremlinserver.urlhttp://127.0.0.1:8182The url of gremlin server.
    gremlinserver.max_route8The max route number for gremlin server.
    gremlinserver.timeout30The timeout in seconds of waiting for gremlin server.
    batch.max_edges_per_batch500The maximum number of edges submitted per batch.
    batch.max_vertices_per_batch500The maximum number of vertices submitted per batch.
    batch.max_write_ratio50The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
    batch.max_write_threads0The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
    auth.authenticatorThe class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.admin_token162f7848-0b6d-4faf-b557-3a0797869c55Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.graph_storehugegraphThe name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
    auth.user_tokens[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
    auth.audit_log_rate1000.0The max rate of audit log output per user, default value is 1000 records per second.
    auth.cache_capacity10240The max cache capacity of each auth cache item.
    auth.cache_expire600The expiration time in seconds of vertex cache.
    auth.remote_urlIf the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ‘,’.
    auth.token_expire86400The expiration time in seconds after token created
    auth.token_secretFXQXbJtbCLxODc6tGci732pkH1cyf8QgSecret key of HS256 algorithm.
    exception.allow_tracefalseWhether to allow exception trace stack.

    Basic Config Options

    Basic Config Options and Backend Config Options correspond to configuration files:{graph-name}.properties,such as hugegraph.properties

config option | default value | description
gremlin.graph | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph.
backend | rocksdb | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
serializer | binary | The serializer for the backend store, available values are [text, binary, cassandra, hbase, mysql].
store | hugegraph | The database name, like a Cassandra Keyspace.
store.connection_detect_interval | 600 | The interval in seconds for detecting connections; if the idle time of a connection exceeds this value, detect it and reconnect if needed before using. Value 0 means detecting every time.
store.graph | g | The graph table name, which stores vertices, edges and properties.
store.schema | m | The schema table name, which stores meta data.
store.system | s | The system table name, which stores system data.
schema.illegal_name_regex | .*\s+$|~.* | The regex specifying the illegal format for schema names.
schema.cache_capacity | 10000 | The max cache size (items) of the schema cache.
vertex.cache_type | l2 | The type of vertex cache, allowed values are [l1, l2].
vertex.cache_capacity | 10000000 | The max cache size (items) of the vertex cache.
vertex.cache_expire | 600 | The expiration time in seconds of the vertex cache.
vertex.check_customized_id_exist | false | Whether to check that vertices exist for those using the customized id strategy.
vertex.default_label | vertex | The default vertex label.
vertex.tx_capacity | 10000 | The max size (items) of uncommitted vertices in a transaction.
vertex.check_adjacent_vertex_exist | false | Whether to check that the adjacent vertices of edges exist.
vertex.lazy_load_adjacent_vertex | true | Whether to lazily load the adjacent vertices of edges.
vertex.part_edge_commit_size | 5000 | Whether to enable the mode of committing part of a vertex's edges; enabled if commit size > 0, 0 means disabled.
vertex.encode_primary_key_number | true | Whether to encode number values of primary keys in the vertex id.
vertex.remove_left_index_at_overwrite | false | Whether to remove the left index at overwrite.
edge.cache_type | l2 | The type of edge cache, allowed values are [l1, l2].
edge.cache_capacity | 1000000 | The max cache size (items) of the edge cache.
edge.cache_expire | 600 | The expiration time in seconds of the edge cache.
edge.tx_capacity | 10000 | The max size (items) of uncommitted edges in a transaction.
query.page_size | 500 | The size of each page when querying by paging.
query.batch_size | 1000 | The size of each batch when querying by batch.
query.ignore_invalid_data | true | Whether to ignore invalid data of vertices or edges.
query.index_intersect_threshold | 1000 | The maximum number of intermediate results to intersect indexes when querying by multiple single-index properties.
query.ramtable_edges_capacity | 20000000 | The maximum number of edges in the ramtable, including OUT and IN edges.
query.ramtable_enable | false | Whether to enable the ramtable for queries of adjacent edges.
query.ramtable_vertices_capacity | 10000000 | The maximum number of vertices in the ramtable; generally the largest vertex id is used as the capacity.
query.optimize_aggregate_by_index | false | Whether to optimize aggregate queries (like count) by index.
oltp.concurrent_depth | 10 | The min depth to enable the concurrent oltp algorithm.
oltp.concurrent_threads | 10 | The number of threads to concurrently execute the oltp algorithm.
oltp.collection_type | EC | The implementation type of collections used in the oltp algorithm.
rate_limit.read | 0 | The max rate (times/s) to execute queries of vertices/edges.
rate_limit.write | 0 | The max rate (items/s) to add/update/delete vertices/edges.
task.wait_timeout | 10 | Timeout in seconds for waiting for a task to complete, such as when truncating or clearing the backend.
task.input_size_limit | 16777216 | The job input size limit in bytes.
task.result_size_limit | 16777216 | The job result size limit in bytes.
task.sync_deletion | false | Whether to delete schema or expired data synchronously.
task.ttl_delete_batch | 1 | The batch size used to delete expired data.
computer.config | /conf/computer.yaml | The config file path of the computer job.
search.text_analyzer | ikanalyzer | Choose a text analyzer for searching the vertex/edge properties, available types are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
search.text_analyzer_mode | smart | Specify the mode for the text analyzer; the available modes per analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
snowflake.datacenter_id | 0 | The datacenter id of the snowflake id generator.
snowflake.force_string | false | Whether to force the snowflake long id to be a string.
snowflake.worker_id | 0 | The worker id of the snowflake id generator.
raft.mode | false | Whether the backend storage works in raft mode.
raft.safe_read | false | Whether to use linearly consistent read.
raft.use_snapshot | false | Whether to use snapshot.
raft.endpoint | 127.0.0.1:8281 | The peer id of the current raft node.
raft.group_peers | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of the current raft group.
raft.path | ./raft-log | The log path of the current raft node.
raft.use_replicator_pipeline | true | Whether to use the replicator pipeline; when turned on, multiple logs can be sent in parallel and the next log does not have to wait for the ack of the current log before being sent.
raft.election_timeout | 10000 | Timeout in milliseconds to launch a round of election.
raft.snapshot_interval | 3600 | The interval in seconds to trigger snapshot save.
raft.backend_threads | current CPU v-cores | The number of threads used to apply tasks to the backend.
raft.read_index_threads | 8 | The number of threads used to execute reading index.
raft.apply_batch | 1 | The apply batch size to trigger the disruptor event handler.
raft.queue_size | 16384 | The disruptor buffer size for the jraft RaftNode, StateMachine and LogManager.
raft.queue_publish_timeout | 60 | The timeout in seconds when publishing an event into the disruptor.
raft.rpc_threads | 80 | The rpc threads for the jraft RPC layer.
raft.rpc_connect_timeout | 5000 | The rpc connect timeout for jraft rpc.
raft.rpc_timeout | 60000 | The rpc timeout for jraft rpc.
raft.rpc_buf_low_water_mark | 10485760 | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this size, ChannelOutboundBuffer.isWritable() returns true, which means low downstream pressure or a good network.
raft.rpc_buf_high_water_mark | 20971520 | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this size does ChannelOutboundBuffer.isWritable() return false, which means the downstream pressure is too great to process requests or the network is very congested, and the upstream needs to limit its rate.
raft.read_strategy | ReadOnlyLeaseBased | The linearizability read strategy.
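For instance, a minimal hugegraph.properties combining the basic options above (only a sketch; all values are the documented defaults) might look like:

    # hugegraph.properties (illustrative)
    gremlin.graph=com.baidu.hugegraph.HugeFactory
    backend=rocksdb
    serializer=binary
    store=hugegraph
    vertex.cache_capacity=10000000
    edge.cache_capacity=1000000
    query.page_size=500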

    RPC server Config Options

config option | default value | description
rpc.client_connect_timeout | 20 | The timeout (in seconds) for the rpc client to connect to the rpc server.
rpc.client_load_balancer | consistentHash | The load-balancing algorithm the rpc client uses to access multiple rpc servers in one cluster. The default value 'consistentHash' means forwarding by request parameters.
rpc.client_read_timeout | 40 | The timeout (in seconds) for the rpc client to read from the rpc server.
rpc.client_reconnect_period | 10 | The period (in seconds) for the rpc client to reconnect to the rpc server.
rpc.client_retries | 3 | The number of retries for failed rpc client calls to the rpc server.
rpc.config_order | 999 | Sofa rpc configuration file loading order; the larger the value, the later it is loaded.
rpc.logger_impl | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class.
rpc.protocol | bolt | Rpc communication protocol; client and server need to specify the same value.
rpc.remote_url | | The remote urls of rpc peers; it can be set to multiple addresses, concatenated by ','. An empty value means not enabled.
rpc.server_adaptive_port | false | Whether the bound port is adaptive; if enabled, when the port is in use, it automatically increments by 1 to probe the next available port. Note that this process is not atomic, so port conflicts may still occur.
rpc.server_host | | The hosts/ips bound by the rpc server to provide services; an empty value means not enabled.
rpc.server_port | 8090 | The port bound by the rpc server to provide services.
rpc.server_timeout | 30 | The timeout (in seconds) of rpc server execution.
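As a hedged sketch of the options above, an rpc-enabled rest-server.properties on one node of a small cluster could look like this (addresses are placeholders):

    # rest-server.properties (rpc-related, illustrative)
    rpc.server_host=127.0.0.1
    rpc.server_port=8090
    rpc.remote_url=192.168.1.11:8090,192.168.1.12:8090
    rpc.protocol=bolt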

    Cassandra Backend Config Options

config option | default value | description
backend | | Must be set to cassandra.
serializer | | Must be set to cassandra.
cassandra.host | localhost | The seed hostname or ip address of the cassandra cluster.
cassandra.port | 9042 | The seed port of the cassandra cluster.
cassandra.connect_timeout | 5 | The cassandra driver connect-to-server timeout (seconds).
cassandra.read_timeout | 20 | The cassandra driver read-from-server timeout (seconds).
cassandra.keyspace.strategy | SimpleStrategy | The replication strategy of the keyspace, valid values are SimpleStrategy or NetworkTopologyStrategy.
cassandra.keyspace.replication | [3] | The keyspace replication factor of SimpleStrategy, like '[3]', or the replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'.
cassandra.username | | The username used to log in to the cassandra cluster.
cassandra.password | | The password corresponding to cassandra.username.
cassandra.compression_type | none | The compression algorithm of cassandra transport: none/snappy/lz4.
cassandra.jmx_port=7199 | 7199 | The port of the JMX API service for cassandra.
cassandra.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation.
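Putting the options above together, a Cassandra-backed hugegraph.properties might contain (illustrative values only):

    # hugegraph.properties (Cassandra backend, illustrative)
    backend=cassandra
    serializer=cassandra
    cassandra.host=localhost
    cassandra.port=9042
    cassandra.keyspace.strategy=SimpleStrategy
    cassandra.keyspace.replication=[3]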

    ScyllaDB Backend Config Options

config option | default value | description
backend | | Must be set to scylladb.
serializer | | Must be set to scylladb.

    Other options are consistent with the Cassandra backend.
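In other words, switching from Cassandra to ScyllaDB usually only means changing these two options (a sketch; the connection options are reused from the Cassandra table above):

    # hugegraph.properties (ScyllaDB backend, illustrative)
    backend=scylladb
    serializer=scylladb
    cassandra.host=localhost
    cassandra.port=9042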

    RocksDB Backend Config Options

config option | default value | description
backend | | Must be set to rocksdb.
serializer | | Must be set to binary.
rocksdb.data_disks | [] | The optimized disks for storing RocksDB data. The format of each element is STORE/TABLE: /path/disk. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap].
rocksdb.data_path | rocksdb-data | The path for storing RocksDB data.
rocksdb.wal_path | rocksdb-data | The path for storing the RocksDB WAL.
rocksdb.allow_mmap_reads | false | Allow the OS to mmap files for reading sst tables.
rocksdb.allow_mmap_writes | false | Allow the OS to mmap files for writing.
rocksdb.block_cache_capacity | 8388608 | The amount of block cache in bytes used by RocksDB, 0 means no block cache.
rocksdb.bloom_filter_bits_per_key | -1 | The bits per key in the bloom filter; a good value is 10, which yields a filter with ~1% false positive rate, -1 means no bloom filter.
rocksdb.bloom_filter_block_based_mode | false | Use block-based filter rather than full filter.
rocksdb.bloom_filter_whole_key_filtering | true | True to place whole keys in the bloom filter, otherwise place the prefix of keys.
rocksdb.bottommost_compression | NO_COMPRESSION | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.bulkload_mode | false | Switch to the mode to bulk load data into RocksDB.
rocksdb.cache_index_and_filter_blocks | false | Indicates whether to put index/filter blocks into the block cache.
rocksdb.compaction_style | LEVEL | Set the compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
rocksdb.compression | SNAPPY_COMPRESSION | The compression algorithm for compressing RocksDB blocks, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.compression_per_level | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for the different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
rocksdb.delayed_write_rate | 16777216 | The rate limit in bytes/s of user write requests when writes need to slow down because compaction gets behind.
rocksdb.log_level | INFO | The info log level of RocksDB.
rocksdb.max_background_jobs | 8 | Maximum number of concurrent background jobs, including flushes and compactions.
rocksdb.level_compaction_dynamic_level_bytes | false | Whether to enable level_compaction_dynamic_level_bytes; if enabled, max_bytes_for_level_multiplier takes priority over max_bytes_for_level_base, and the bytes of the base level are dynamic for a more predictable LSM tree, which is useful to limit worst-case space amplification. Turning this feature on/off for an existing DB can cause an unexpected LSM tree structure, so it is not recommended.
rocksdb.max_bytes_for_level_base | 536870912 | The upper bound of the total size of level-1 files in bytes.
rocksdb.max_bytes_for_level_multiplier | 10.0 | The ratio between the total size of level (L+1) files and the total size of level L files for all L.
rocksdb.max_open_files | -1 | The maximum number of open files that can be cached by RocksDB, -1 means no limit.
rocksdb.max_subcompactions | 4 | The maximum number of threads per compaction job.
rocksdb.max_write_buffer_number | 6 | The maximum number of write buffers that are built up in memory.
rocksdb.max_write_buffer_number_to_maintain | 0 | The total maximum number of write buffers to maintain in memory.
rocksdb.min_write_buffer_number_to_merge | 2 | The minimum number of write buffers that will be merged together.
rocksdb.num_levels | 7 | Set the number of levels for this database.
rocksdb.optimize_filters_for_hits | false | This flag allows us to not store filters for the last level.
rocksdb.optimize_mode | true | Optimize for heavy workloads and big datasets.
rocksdb.pin_l0_filter_and_index_blocks_in_cache | false | Indicates whether to put index/filter blocks into the block cache.
rocksdb.sst_path | | The path for ingesting SST files into RocksDB.
rocksdb.target_file_size_base | 67108864 | The target file size for compaction in bytes.
rocksdb.target_file_size_multiplier | 1 | The size ratio between a level-L file and a level-(L+1) file.
rocksdb.use_direct_io_for_flush_and_compaction | false | Enable the OS to use direct read/writes in flush and compaction.
rocksdb.use_direct_reads | false | Enable the OS to use direct I/O for reading sst tables.
rocksdb.write_buffer_size | 134217728 | Amount of data in bytes to build up in memory.
rocksdb.max_manifest_file_size | 104857600 | The max size of the manifest file in bytes.
rocksdb.skip_stats_update_on_db_open | false | Whether to skip the statistics update when opening the database; setting this flag to true allows us to not update statistics.
rocksdb.max_file_opening_threads | 16 | The max number of threads used to open files.
rocksdb.max_total_wal_size | 0 | Total size of WAL files in bytes. Once WALs exceed this size, the related column families will be forced to flush, 0 means no limit.
rocksdb.db_write_buffer_size | 0 | Total size of write buffers in bytes across all column families, 0 means no limit.
rocksdb.delete_obsolete_files_period | 21600 | The periodicity in seconds at which obsolete files get deleted, 0 means always do a full purge.
rocksdb.hard_pending_compaction_bytes_limit | 274877906944 | The hard limit to impose on pending compaction in bytes.
rocksdb.level0_file_num_compaction_trigger | 2 | Number of files to trigger level-0 compaction.
rocksdb.level0_slowdown_writes_trigger | 20 | Soft limit on the number of level-0 files for slowing down writes.
rocksdb.level0_stop_writes_trigger | 36 | Hard limit on the number of level-0 files for stopping writes.
rocksdb.soft_pending_compaction_bytes_limit | 68719476736 | The soft limit to impose on pending compaction in bytes.
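As a sketch of the options above (all paths are placeholders), a RocksDB-backed hugegraph.properties that spreads hot tables over separate disks might look like:

    # hugegraph.properties (RocksDB backend, illustrative)
    backend=rocksdb
    serializer=binary
    rocksdb.data_path=/data/rocksdb
    rocksdb.wal_path=/data/rocksdb
    # per-table disks follow the STORE/TABLE: /path/disk format described above
    rocksdb.data_disks=[g/vertex: /disk1/rocksdb, g/edge_out: /disk2/rocksdb]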

    HBase Backend Config Options

config option | default value | description
backend | | Must be set to hbase.
serializer | | Must be set to hbase.
hbase.hosts | localhost | The hostnames or ip addresses of HBase zookeeper, separated by commas.
hbase.port | 2181 | The port of HBase zookeeper.
hbase.threads_max | 64 | The max number of threads for hbase connections.
hbase.znode_parent | /hbase | The znode parent path of HBase zookeeper.
hbase.zk_retry | 3 | The recovery retry times of HBase zookeeper.
hbase.aggregation_timeout | 43200 | The timeout in seconds of waiting for aggregation.
hbase.kerberos_enable | false | Whether Kerberos authentication is enabled for HBase.
hbase.kerberos_keytab | | The HBase keytab file for kerberos authentication.
hbase.kerberos_principal | | The HBase principal for kerberos authentication.
hbase.krb5_conf | etc/krb5.conf | Kerberos configuration file, including KDC IP, default realm, etc.
hbase.hbase_site | /etc/hbase/conf/hbase-site.xml | The HBase configuration file.
hbase.enable_partition | true | Whether pre-split partitions are enabled for HBase.
hbase.vertex_partitions | 10 | The number of partitions of the HBase vertex table.
hbase.edge_partitions | 30 | The number of partitions of the HBase edge table.
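Combining the options above, an HBase-backed hugegraph.properties might contain (zookeeper hostnames are placeholders):

    # hugegraph.properties (HBase backend, illustrative)
    backend=hbase
    serializer=hbase
    hbase.hosts=zk1,zk2,zk3
    hbase.port=2181
    hbase.znode_parent=/hbase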

    MySQL & PostgreSQL Backend Config Options

config option | default value | description
backend | | Must be set to mysql.
serializer | | Must be set to mysql.
jdbc.driver | com.mysql.jdbc.Driver | The JDBC driver class to connect to the database.
jdbc.url | jdbc:mysql://127.0.0.1:3306 | The url of the database in JDBC format.
jdbc.username | root | The username to log in to the database.
jdbc.password | ****** | The password corresponding to jdbc.username.
jdbc.ssl_mode | false | The SSL mode of connections with the database.
jdbc.reconnect_interval | 3 | The interval (seconds) between reconnections when the database connection fails.
jdbc.reconnect_max_times | 3 | The number of reconnect attempts when the database connection fails.
jdbc.storage_engine | InnoDB | The storage engine of the backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
jdbc.postgresql.connect_database | template1 | The database used to connect when initializing the store, dropping the store or checking whether the store exists.
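For example, a MySQL-backed hugegraph.properties built from the options above might look like this (credentials are placeholders):

    # hugegraph.properties (MySQL backend, illustrative)
    backend=mysql
    serializer=mysql
    jdbc.driver=com.mysql.jdbc.Driver
    jdbc.url=jdbc:mysql://127.0.0.1:3306
    jdbc.username=root
    jdbc.password=******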

    PostgreSQL Backend Config Options

config option | default value | description
backend | | Must be set to postgresql.
serializer | | Must be set to postgresql.

    Other options are consistent with the MySQL backend.

    The driver and url of the PostgreSQL backend should be set to:

    • jdbc.driver=org.postgresql.Driver
    • jdbc.url=jdbc:postgresql://localhost:5432/
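Putting the two settings above together with the MySQL-style options, a PostgreSQL-backed hugegraph.properties could look like this (credentials are placeholders):

    # hugegraph.properties (PostgreSQL backend, illustrative)
    backend=postgresql
    serializer=postgresql
    jdbc.driver=org.postgresql.Driver
    jdbc.url=jdbc:postgresql://localhost:5432/
    jdbc.username=postgres
    jdbc.password=******
    jdbc.postgresql.connect_database=template1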

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/config/index.html b/docs/config/index.html index 48f38b608..cb33674e8 100644 --- a/docs/config/index.html +++ b/docs/config/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Config


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/contribution-guidelines/_print/index.html b/docs/contribution-guidelines/_print/index.html index 3318468ea..208697b73 100644 --- a/docs/contribution-guidelines/_print/index.html +++ b/docs/contribution-guidelines/_print/index.html @@ -46,7 +46,7 @@ git rebase -i master

    And push it to GitHub fork repo again:

    # force push the local commit to fork repo
     git push -f origin bugfix-branch:bugfix-branch
    -

    GitHub will automatically update the Pull Request after we push it, just wait for code review.

    2 - Subscribe Mailing Lists

    It is highly recommended to subscribe to the development mailing list to keep up-to-date with the community.

In the process of using HugeGraph, if you have any questions, ideas or suggestions, you can take part in building the HugeGraph community through the Apache mailing list. Sending a subscription email is very simple; the steps are as follows:

1. Email dev-subscribe@hugegraph.apache.org with your own email address; the subject and content are arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, please check whether the email was automatically classified as spam, promotion, subscription, etc.). Then reply directly to that email, or click the link in the email to reply quickly; the subject and content are arbitrary.

3. Receive a welcome email. After completing the above steps, you will receive a welcome email with the subject WELCOME to dev@hugegraph.apache.org, which means you have successfully subscribed to the Apache HugeGraph mailing list.

    Unsubscribe Mailing Lists

    If you do not need to know what’s going on with HugeGraph, you can unsubscribe from the mailing list.

The steps to unsubscribe from the mailing list are as follows:

1. Email dev-unsubscribe@hugegraph.apache.org with your subscribed email address; the subject and content are arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, please check whether the email was automatically classified as spam, promotion, subscription, etc.). Then reply directly to that email, or click the link in the email to reply quickly; the subject and content are arbitrary.

3. Receive a goodbye email. After completing the above steps, you will receive a goodbye email with the subject GOODBYE from dev@hugegraph.apache.org, which means you have successfully unsubscribed from the Apache HugeGraph mailing list and will no longer receive emails from dev@hugegraph.apache.org.

    diff --git a/docs/contribution-guidelines/contribute/index.html b/docs/contribution-guidelines/contribute/index.html index 039e4299c..9dffbbdad 100644 --- a/docs/contribution-guidelines/contribute/index.html +++ b/docs/contribution-guidelines/contribute/index.html @@ -62,7 +62,7 @@ git rebase -i master

    And push it to GitHub fork repo again:

    # force push the local commit to fork repo
     git push -f origin bugfix-branch:bugfix-branch
    -

    GitHub will automatically update the Pull Request after we push it, just wait for code review.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/contribution-guidelines/index.html b/docs/contribution-guidelines/index.html index 62378e861..4a17bd8ef 100644 --- a/docs/contribution-guidelines/index.html +++ b/docs/contribution-guidelines/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Contribution Guidelines


    diff --git a/docs/contribution-guidelines/subscribe/index.html b/docs/contribution-guidelines/subscribe/index.html index ad3e7cd82..a4aa641ed 100644 --- a/docs/contribution-guidelines/subscribe/index.html +++ b/docs/contribution-guidelines/subscribe/index.html @@ -14,7 +14,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Subscribe Mailing Lists

    It is highly recommended to subscribe to the development mailing list to keep up-to-date with the community.

In the process of using HugeGraph, if you have any questions, ideas or suggestions, you can take part in building the HugeGraph community through the Apache mailing list. Sending a subscription email is very simple; the steps are as follows:

1. Email dev-subscribe@hugegraph.apache.org with your own email address; the subject and content are arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, please check whether the email was automatically classified as spam, promotion, subscription, etc.). Then reply directly to that email, or click the link in the email to reply quickly; the subject and content are arbitrary.

3. Receive a welcome email. After completing the above steps, you will receive a welcome email with the subject WELCOME to dev@hugegraph.apache.org, which means you have successfully subscribed to the Apache HugeGraph mailing list.

    Unsubscribe Mailing Lists

    If you do not need to know what’s going on with HugeGraph, you can unsubscribe from the mailing list.

The steps to unsubscribe from the mailing list are as follows:

1. Email dev-unsubscribe@hugegraph.apache.org with your subscribed email address; the subject and content are arbitrary.

2. Receive the confirmation email and reply. After completing step 1, you will receive a confirmation email from dev-help@hugegraph.apache.org (if you do not receive it, please check whether the email was automatically classified as spam, promotion, subscription, etc.). Then reply directly to that email, or click the link in the email to reply quickly; the subject and content are arbitrary.

3. Receive a goodbye email. After completing the above steps, you will receive a goodbye email with the subject GOODBYE from dev@hugegraph.apache.org, which means you have successfully unsubscribed from the Apache HugeGraph mailing list and will no longer receive emails from dev@hugegraph.apache.org.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/download/download/index.html b/docs/download/download/index.html index 977d632dc..7908f06b4 100644 --- a/docs/download/download/index.html +++ b/docs/download/download/index.html @@ -20,7 +20,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Download HugeGraph

    Latest version

    The latest HugeGraph: 0.12.0, released on 2021-12-31.

components | description | download
HugeGraph-Server | The main program of HugeGraph | 0.12.0
HugeGraph-Hubble | Web-based visual graphical interface | 1.6.0
HugeGraph-Loader | Data import tool | 0.12.0
HugeGraph-Tools | Command line toolset | 1.6.0

    Versions mapping

server | client | loader | hubble | common | tools
0.12.0 | 2.0.1 | 0.12.0 | 1.6.0 | 2.0.1 | 1.6.0
0.11.2 | 1.9.1 | 0.11.1 | 1.5.0 | 1.8.1 | 1.5.0
0.10.4 | 1.8.0 | 0.10.1 | 0.10.0 | 1.6.16 | 1.4.0
0.9.2 | 1.7.0 | 0.9.0 | 0.9.0 | 1.6.0 | 1.3.0
0.8.0 | 1.6.4 | 0.8.0 | 0.8.0 | 1.5.3 | 1.2.0
0.7.4 | 1.5.8 | 0.7.0 | 0.7.0 | 1.4.9 | 1.1.0
0.6.1 | 1.5.6 | 0.6.1 | 0.6.1 | 1.4.3 | 1.0.0
0.5.6 | 1.5.0 | 0.5.6 | 0.5.0 | 1.4.0 |
0.4.5 | 1.4.7 | 0.2.2 | 0.4.1 | 1.3.12 |

Note: The latest graph analysis and display platform is Hubble, which supports server v0.10+.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/guides/_print/index.html b/docs/guides/_print/index.html index 43e017f39..6854a4529 100644 --- a/docs/guides/_print/index.html +++ b/docs/guides/_print/index.html @@ -306,7 +306,7 @@ # | %23 & | %26 = | %3D -
  • Querying vertices or edges of a certain label (query by label) times out

    Since the amount of data under a single label may be large, please add a limit to the query.

  • Operating the graph through the RESTful API works, but sending Gremlin statements fails with: Request Failed(500)

    The GremlinServer configuration may be wrong: check whether the host and port in gremlin-server.yaml match gremlinserver.url in rest-server.properties; if they do not match, fix them and restart the service.

  • A Socket Timeout exception occurs while importing data with Loader, and then the Loader is interrupted

    Continuously importing data puts the Server under heavy load, which causes some requests to time out. You can tune the Loader parameters (such as retry count, retry interval and error tolerance) to relieve the pressure on the Server and reduce how often this problem occurs.

  • How can all vertices and edges be deleted? The RESTful API has no such interface, and calling Gremlin's g.V().drop() fails with Vertices in transaction have reached capacity xxx

    There is currently no good way to delete all data. If you deployed the Server and backend yourself, you can simply clear the database and restart the Server. Alternatively, use the paging API or scan API to fetch all data first and then delete it item by item.

  • The database was cleared and init-store was executed, but adding a schema reports "xxx has existed"

    HugeGraphServer has internal caches; when the database is cleared, the Server must be restarted as well, otherwise the leftover cache will cause inconsistencies.

  • Inserting vertices or edges fails with: Id max length is 128, but got xxx {yyy} or Big id max length is 32768, but got xxx

    To guarantee query performance, the current backend storage limits the length of the id column: a vertex id cannot exceed 128 bytes, an edge id cannot exceed 32768 bytes, and an index id cannot exceed 128 bytes.

  • Are nested properties supported? If not, is there an alternative?

    Nested properties are not supported at the moment. Alternative: extract the nested property as a separate vertex and connect it with an edge.

  • Can one EdgeLabel connect multiple pairs of VertexLabels? For example, an "invest" relation could be a "person" investing in a "company" or a "company" investing in a "company"

    One EdgeLabel cannot connect multiple pairs of VertexLabels; the EdgeLabel needs to be split into finer-grained ones, e.g. "personal-invest" and "corporate-invest".

  • Sending a request through the REST API returns HTTP 415 Unsupported Media Type

    The request header must specify Content-Type: application/json (see the curl example after this list).

  • Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues
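    For the HTTP 415 item above, a minimal request with an explicit Content-Type header might look like this (the URL, graph name and body are placeholders for a typical deployment, not taken from this page):

        curl -X POST "http://127.0.0.1:8080/apis/graphs/hugegraph/graph/vertices" \
             -H "Content-Type: application/json" \
             -d '{"label": "person", "properties": {"name": "marko", "age": 29}}'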

    diff --git a/docs/guides/architectural/index.html b/docs/guides/architectural/index.html index 1b3561687..ec3cdb2b6 100644 --- a/docs/guides/architectural/index.html +++ b/docs/guides/architectural/index.html @@ -11,7 +11,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Architecture Overview

    1 Overview

    As a general-purpose graph database product, HugeGraph needs to provide the basic capabilities of graph data, as shown in the figure below. HugeGraph consists of three functional layers: the storage layer, the computing layer and the user interface layer. HugeGraph supports both OLTP and OLAP graph computing; OLTP implements the Apache TinkerPop3 framework and supports the Gremlin query language, while OLAP computing is implemented on top of SparkGraphX.

    image

    2 Components

    The main functionality of HugeGraph is composed of components such as HugeCore, ApiServer, HugeGraph-Client, HugeGraph-Loader and HugeGraph-Studio; the communication relationships between the components are shown in the figure below.

    image

    Last modified November 27, 2022: Add HugeGraph-Computer Doc (#155) (19ab2ff)
    diff --git a/docs/guides/backup-restore/index.html b/docs/guides/backup-restore/index.html index 894d981b1..b514aedc4 100644 --- a/docs/guides/backup-restore/index.html +++ b/docs/guides/backup-restore/index.html @@ -49,7 +49,7 @@
    Response Body
    {
         "mode": "RESTORING"
     }
    -

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/guides/custom-plugin/index.html b/docs/guides/custom-plugin/index.html index 2d8b3f888..836a5fb69 100644 --- a/docs/guides/custom-plugin/index.html +++ b/docs/guides/custom-plugin/index.html @@ -213,7 +213,7 @@ } }

    4. Configure the SPI entry

    1. Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
    2. Create a text file named com.baidu.hugegraph.plugin.HugeGraphPlugin in the services directory
    3. The content of the file is: com.baidu.hugegraph.plugin.DemoPlugin

    5. Build the Jar package

    Package the project with Maven by running mvn package in the project directory; the Jar file will be generated under the target directory. To use it, copy the Jar into the plugins directory and restart the service for it to take effect.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/guides/desgin-concept/index.html b/docs/guides/desgin-concept/index.html index e4f85d5dd..43100f433 100644 --- a/docs/guides/desgin-concept/index.html +++ b/docs/guides/desgin-concept/index.html @@ -115,7 +115,7 @@ assert !graph.vertices().hasNext(); assert !graph.edges().hasNext(); } -
    Transaction implementation
    Note

    The RESTful API does not expose a transaction interface yet

    The TinkerPop API allows opening a transaction, which is automatically closed when the request completes (Gremlin Server forces it to close)
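    As a hedged sketch of what the note above means in code (standard TinkerPop API usage, not a HugeGraph-specific interface):

        // open and commit a transaction through the TinkerPop API;
        // the RESTful API currently has no equivalent
        Transaction tx = graph.tx();
        try {
            graph.addVertex(T.label, "person", "name", "marko");
            tx.commit();
        } catch (Exception e) {
            tx.rollback();
        }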


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/guides/faq/index.html b/docs/guides/faq/index.html index b977181b9..ea2a7dd7b 100644 --- a/docs/guides/faq/index.html +++ b/docs/guides/faq/index.html @@ -81,7 +81,7 @@ # | %23 & | %26 = | %3D -
  • Querying vertices or edges of a certain label (query by label) times out

    Since the amount of data under a single label may be large, please add a limit to the query.

  • Operating the graph through the RESTful API works, but sending Gremlin statements fails with: Request Failed(500)

    The GremlinServer configuration may be wrong: check whether the host and port in gremlin-server.yaml match gremlinserver.url in rest-server.properties; if they do not match, fix them and restart the service.

  • A Socket Timeout exception occurs while importing data with Loader, and then the Loader is interrupted

    Continuously importing data puts the Server under heavy load, which causes some requests to time out. You can tune the Loader parameters (such as retry count, retry interval and error tolerance) to relieve the pressure on the Server and reduce how often this problem occurs.

  • How can all vertices and edges be deleted? The RESTful API has no such interface, and calling Gremlin's g.V().drop() fails with Vertices in transaction have reached capacity xxx

    There is currently no good way to delete all data. If you deployed the Server and backend yourself, you can simply clear the database and restart the Server. Alternatively, use the paging API or scan API to fetch all data first and then delete it item by item.

  • The database was cleared and init-store was executed, but adding a schema reports "xxx has existed"

    HugeGraphServer has internal caches; when the database is cleared, the Server must be restarted as well, otherwise the leftover cache will cause inconsistencies.

  • Inserting vertices or edges fails with: Id max length is 128, but got xxx {yyy} or Big id max length is 32768, but got xxx

    To guarantee query performance, the current backend storage limits the length of the id column: a vertex id cannot exceed 128 bytes, an edge id cannot exceed 32768 bytes, and an index id cannot exceed 128 bytes.

  • Are nested properties supported? If not, is there an alternative?

    Nested properties are not supported at the moment. Alternative: extract the nested property as a separate vertex and connect it with an edge.

  • Can one EdgeLabel connect multiple pairs of VertexLabels? For example, an "invest" relation could be a "person" investing in a "company" or a "company" investing in a "company"

    One EdgeLabel cannot connect multiple pairs of VertexLabels; the EdgeLabel needs to be split into finer-grained ones, e.g. "personal-invest" and "corporate-invest".

  • Sending a request through the REST API returns HTTP 415 Unsupported Media Type

    The request header must specify Content-Type: application/json

  • Other questions can be searched in the issue area of the corresponding project, e.g. Server-Issues / Loader Issues


    Last modified September 15, 2022: cn: format table & fix typo (#150) (53bf0aa)
    diff --git a/docs/guides/index.html b/docs/guides/index.html index d940fd433..c11b3674c 100644 --- a/docs/guides/index.html +++ b/docs/guides/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    GUIDES


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/index.html b/docs/index.html index c8b65a85f..f5d226e84 100644 --- a/docs/index.html +++ b/docs/index.html @@ -5,7 +5,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Documentation

    Welcome to HugeGraph docs


    Last modified April 21, 2022: update homepage (2b9c356)
    diff --git a/docs/introduction/readme/index.html b/docs/introduction/readme/index.html index 2fabeab9e..a4ae5b808 100644 --- a/docs/introduction/readme/index.html +++ b/docs/introduction/readme/index.html @@ -10,7 +10,7 @@ implemented the Apache TinkerPop3 framework and is fully compatible with the Gremlin query language, With complete toolchain components, it helps users to easily build applications and products based on graph databases. HugeGraph supports fast import of more than 10 billion vertices and edges, and provides millisecond-level relational query capability (OLTP). It supports large-scale distributed graph computing (OLAP).

    Typical application scenarios of HugeGraph include deep relationship exploration, association analysis, path search, feature extraction, data clustering, community detection, knowledge graph, etc., and it is applicable to business fields such as network security, telecommunication fraud detection, financial risk control, advertising recommendation, social networks, intelligent robots, and so on.


    Features

    HugeGraph supports graph operations in online and offline environments, supports batch import of data, supports efficient complex relationship analysis, and can be seamlessly integrated with big data platforms. HugeGraph supports multi-user parallel operations; users can enter Gremlin query statements and get graph query results promptly, and they can also call the HugeGraph API in their own programs for graph analysis or queries.

    This system has the following features:

    The functions of this system include but are not limited to:

    Modules

    Contact Us


    Last modified November 27, 2022: improve doc (26a2e8d)
    diff --git a/docs/language/_print/index.html b/docs/language/_print/index.html index c895b5e9c..0dc8e4ae8 100644 --- a/docs/language/_print/index.html +++ b/docs/language/_print/index.html @@ -69,7 +69,7 @@ // what is the name of the brother and the name of the place? g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name') -

    It is recommended to use HugeGraph-Studio to run the above code in a visual way. The code can also be executed through HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver and other means.

    3.2 Summary

    HugeGraph currently supports the Gremlin syntax; users can fulfill all kinds of query needs through Gremlin / REST-API.

    diff --git a/docs/language/hugegraph-example/index.html b/docs/language/hugegraph-example/index.html index 70cd756ea..83818c181 100644 --- a/docs/language/hugegraph-example/index.html +++ b/docs/language/hugegraph-example/index.html @@ -97,7 +97,7 @@ // what is the name of the brother and the name of the place? g.V(pluto).out('brother').as('god').out('lives').as('place').select('god','place').by('name') -

    It is recommended to use HugeGraph-Studio to run the above code in a visual way. The code can also be executed through HugeGraph-Client, HugeApi, GremlinConsole, GremlinDriver and other means.

    3.2 Summary

    HugeGraph currently supports the Gremlin syntax; users can fulfill all kinds of query needs through Gremlin / REST-API.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/language/hugegraph-gremlin/index.html b/docs/language/hugegraph-gremlin/index.html index 45c9888dd..2e7b21711 100644 --- a/docs/language/hugegraph-gremlin/index.html +++ b/docs/language/hugegraph-gremlin/index.html @@ -18,7 +18,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Gremlin

    Overview

    HugeGraph supports Gremlin, the graph traversal query language of Apache TinkerPop3. SQL is the query language for relational databases, while Gremlin is a general-purpose graph database query language: it can be used to create graph entities (Vertex and Edge), modify entity properties, delete entities, and also perform graph queries.

    Gremlin can be used to create graph entities (Vertex and Edge), modify entity properties and delete entities, and, more importantly, to perform graph query and analysis operations.

    TinkerPop Features

    HugeGraph implements the TinkerPop framework, but does not implement all TinkerPop features.

    The following tables list HugeGraph's support for the various TinkerPop features:

    Graph Features

    Name | Description | Support
    Computer | Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing | false
    Transactions | Determines if the {@code Graph} implementation supports transactions. | true
    Persistence | Determines if the {@code Graph} implementation supports persisting its contents natively to disk. This feature does not refer to every graph's ability to write to disk via the Gremlin IO packages (e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example, TinkerGraph does not support this feature as it is a pure in-sideEffects graph. | true
    ThreadedTransactions | Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction to be executed across multiple threads via {@link Transaction#createThreadedTx()}. | false
    ConcurrentAccess | Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. | false

    Vertex Features

    Name | Description | Support
    UserSuppliedIds | Determines if an {@link Element} can have a user defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. | false
    NumericIds | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    StringIds | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    UuidIds | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    CustomIds | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementation, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    AnyIds | Determines if any Java object is a suitable identifier for an {@link Element}. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. | false
    AddProperty | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}. | true
    RemoveProperty | Determines if an {@link Element} allows properties to be removed. | true
    AddVertices | Determines if a {@link Vertex} can be added to the {@code Graph}. | true
    MultiProperties | Determines if a {@link Vertex} can support multiple properties with the same key. | false
    DuplicateMultiProperties | Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, {@link #supportsMetaProperties()} must also return true. By default this method just returns what {@link #supportsMultiProperties()} returns. | false
    MetaProperties | Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties. | false
    RemoveVertices | Determines if a {@link Vertex} can be removed from the {@code Graph}. | true

    Edge Features

    Name | Description | Support
    UserSuppliedIds | Determines if an {@link Element} can have a user defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. | false
    NumericIds | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    StringIds | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    UuidIds | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    CustomIds | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementation, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. | false
    AnyIds | Determines if any Java object is a suitable identifier for an {@link Element}. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}. | false
    AddProperty | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}. | true
    RemoveProperty | Determines if an {@link Element} allows properties to be removed. | true
    AddEdges | Determines if an {@link Edge} can be added to a {@code Vertex}. | true
    RemoveEdges | Determines if an {@link Edge} can be removed from a {@code Vertex}. | true

    Data Type Features

    Name | Description | Support
    BooleanValues | | true
    ByteValues | | true
    DoubleValues | | true
    FloatValues | | true
    IntegerValues | | true
    LongValues | | true
    MapValues | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself | false
    MixedListValues | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "mixed" it does not need to contain objects of the same type. | false
    BooleanArrayValues | | false
    ByteArrayValues | | true
    DoubleArrayValues | | false
    FloatArrayValues | | false
    IntegerArrayValues | | false
    LongArrayValues | | false
    SerializableValues | | false
    StringArrayValues | | false
    StringValues | | true
    UniformListValues | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "uniform" it must contain objects of the same type. | false

    Gremlin steps

    HugeGraph supports all Gremlin steps. For a complete reference on Gremlin, please refer to the official Gremlin website. A short composed example follows the table below.

    step | description | documentation
    addE | Add an edge between two vertices | addE step
    addV | Add a vertex to the graph | addV step
    and | Ensure that all traversals return a value | and step
    as | A step modulator to assign a variable to the output of a step | as step
    by | A step modulator used together with group and order | by step
    coalesce | Return the first traversal that yields a result | coalesce step
    constant | Return a constant value; used together with coalesce | constant step
    count | Return a count from the traversal | count step
    dedup | Return values with duplicates removed | dedup step
    drop | Discard values (vertices/edges) | drop step
    fold | Act as a barrier that computes an aggregated value of the results | fold step
    group | Group values by the specified label | group step
    has | Filter properties, vertices and edges; supports the hasLabel, hasId, hasNot and has variants | has step
    inject | Inject values into a stream | inject step
    is | Perform a filter using a boolean expression | is step
    limit | Limit the number of items in the traversal | limit step
    local | Wrap a part of the traversal locally, similar to a subquery | local step
    not | Produce the negation of a filter | not step
    optional | Return the result of the specified traversal if it yields one, otherwise return the calling element | optional step
    or | Ensure that at least one traversal returns a value | or step
    order | Return results in the specified sort order | order step
    path | Return the full path of the traversal | path step
    project | Project properties as a map | project step
    properties | Return the properties with the specified labels | properties step
    range | Filter by the specified range of values | range step
    repeat | Repeat a step a specified number of times; used for looping | repeat step
    sample | Sample the results returned by the traversal | sample step
    select | Project the results returned by the traversal | select step
    store | Non-blocking aggregation of the results returned by the traversal | store step
    tree | Aggregate the paths from vertices into a tree | tree step
    unfold | Unfold an iterator as a step | unfold step
    union | Merge the results returned by multiple traversals | union step
    V | The steps needed for traversals between vertices and edges: V, E, out, in, both, outE, inE, bothE, outV, inV, bothV, otherV | order step
    where | Filter the results returned by the traversal; supports the eq, neq, lt, lte, gt, gte and between operators | where step
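    As a small illustration of composing several of these steps (the 'person' label and the 'age'/'name' properties are assumptions from a typical example graph, not part of this page):

        // count all persons, then list up to 3 deduplicated names of persons older than 30
        g.V().hasLabel('person').count()
        g.V().hasLabel('person').has('age', gt(30)).values('name').dedup().limit(3)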

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    MultiPropertiesDetermines if a {@link Vertex} can support multiple properties with the same key.false
    DuplicateMultiPropertiesDetermines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns.false
    MetaPropertiesDetermines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.false
    RemoveVerticesDetermines if a {@link Vertex} can be removed from the {@code Graph}.true

    Edge Features

    NameDescriptionSupport
    UserSuppliedIdsDetermines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept.false
    NumericIdsDetermines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    StringIdsDetermines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    UuidIdsDetermines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    CustomIdsDetermines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB’s {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.false
    AnyIdsDetermines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.false
    AddPropertyDetermines if an {@link Element} allows properties to be added. This feature is set independently from supporting “data types” and refers to support of calls to {@link Element#property(String, Object)}.true
    RemovePropertyDetermines if an {@link Element} allows properties to be removed.true
    AddEdgesDetermines if an {@link Edge} can be added to a {@code Vertex}.true
    RemoveEdgesDetermines if an {@link Edge} can be removed from a {@code Vertex}.true

    Data Type Features

    NameDescriptionSupport
    BooleanValuestrue
    ByteValuestrue
    DoubleValuestrue
    FloatValuestrue
    IntegerValuestrue
    LongValuestrue
    MapValuesSupports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itselffalse
    MixedListValuesSupports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “mixed” it does not need to contain objects of the same type.false
    BooleanArrayValuesfalse
    ByteArrayValuestrue
    DoubleArrayValuesfalse
    FloatArrayValuesfalse
    IntegerArrayValuesfalse
    LongArrayValuesfalse
    SerializableValuesfalse
    StringArrayValuesfalse
    StringValuestrue
    UniformListValuesSupports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is “uniform” it must contain objects of the same type.false
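上表中的各项特性可以在运行时通过 TinkerPop 的 Features 接口查看。下面是在 Gremlin Console 中检查特性支持情况的简单示意(假设 graph 变量已绑定到 HugeGraph 的图实例):

  // 打印全部特性支持情况
  graph.features()
  // 也可以单独判断某一项特性,返回值与上表一致
  graph.features().graph().supportsTransactions()        // true
  graph.features().vertex().supportsUserSuppliedIds()    // false
  graph.features().vertex().supportsMetaProperties()     // false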

    Gremlin的步骤

HugeGraph支持Gremlin的所有步骤。有关Gremlin的完整参考信息,请参考Gremlin官网

    步骤说明文档
    addE在两个顶点之间添加边addE step
    addV将顶点添加到图形addV step
    and确保所有遍历都返回值and step
    as用于向步骤的输出分配变量的步骤调制器as step
    bygrouporder配合使用的步骤调制器by step
    coalesce返回第一个返回结果的遍历coalesce step
    constant返回常量值。 与coalesce配合使用constant step
    count从遍历返回计数count step
    dedup返回已删除重复内容的值dedup step
    drop丢弃值(顶点/边缘)drop step
    fold充当用于计算结果聚合值的屏障fold step
    group根据指定的标签将值分组group step
    has用于筛选属性、顶点和边缘。 支持hasLabelhasIdhasNothas 变体has step
    inject将值注入流中inject step
    is用于通过布尔表达式执行筛选器is step
    limit用于限制遍历中的项数limit step
    local本地包装遍历的某个部分,类似于子查询local step
    not用于生成筛选器的求反结果not step
    optional如果生成了某个结果,则返回指定遍历的结果,否则返回调用元素optional step
    or确保至少有一个遍历会返回值or step
    order按指定的排序顺序返回结果order step
    path返回遍历的完整路径path step
    project将属性投影为映射project step
    properties返回指定标签的属性properties step
    range根据指定的值范围进行筛选range step
    repeat将步骤重复指定的次数。 用于循环repeat step
    sample用于对遍历返回的结果采样sample step
    select用于投影遍历返回的结果select step
    store用于遍历返回的非阻塞聚合store step
    tree将顶点中的路径聚合到树中tree step
    unfold将迭代器作为步骤展开unfold step
    union合并多个遍历返回的结果union step
    V包括顶点与边之间的遍历所需的步骤:VEoutinbothoutEinEbothEoutVinVbothVotherVorder step
    where用于筛选遍历返回的结果。 支持 eqneqltltegtgtebetween 运算符where step
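下面的查询把上表中的若干常用步骤串在一起,仅作示意(person、knows、age、name 等 label 与属性均为假设的示例数据):

  g.V().hasLabel('person').      // hasLabel:按 label 筛选顶点
    has('age', gt(20)).          // has + 谓词:按属性筛选
    out('knows').dedup().        // out:沿边遍历邻接顶点;dedup:去重
    order().by('age').           // order/by:按属性排序(默认升序)
    limit(10).                   // limit:限制返回条数
    values('name')               // values:取属性值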

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/language/index.html b/docs/language/index.html index fc3f77ab8..4e81c83a0 100644 --- a/docs/language/index.html +++ b/docs/language/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    QUERY LANGUAGE


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    QUERY LANGUAGE


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/_print/index.html b/docs/performance/_print/index.html index 398e6fc8e..92a0cbd09 100644 --- a/docs/performance/_print/index.html +++ b/docs/performance/_print/index.html @@ -2,7 +2,7 @@

    1 - HugeGraph BenchMark Performance

    1 测试环境

    1.1 硬件信息

CPU | Memory | 网卡 | 磁盘
48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    • Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交

    • Single Insertion,单条插入,每个顶点或者每条边立即提交

    • Query,主要是图数据库的基本查询操作:

      • Find Neighbors,查询所有顶点的邻居
      • Find Adjacent Nodes,查询所有边的邻接顶点
      • Find Shortest Path,查询第一个顶点到100个随机顶点的最短路径
    • Clustering,基于Louvain Method的社区发现算法

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
名称 | vertex数目 | edge数目 | 文件大小
email-enron.txt | 36,691 | 367,661 | 4MB
com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
amazon0601.txt | 403,393 | 3,387,388 | 47.9MB
com-lj.ungraph.txt | 3,997,961 | 34,681,189 | 479MB

    1.3 服务配置

    • HugeGraph版本:0.5.6,RestServer和Gremlin Server和backends都在同一台服务器上

      • RocksDB版本:rocksdbjni-5.8.6
    • Titan版本:0.5.4, 使用thrift+Cassandra模式

      • Cassandra版本:cassandra-3.10,commit-log 和 data 共用SSD
    • Neo4j版本:2.0.1

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.629 | 5.711 | 5.243 | 67.033
Titan | 10.15 | 108.569 | 150.266 | 1217.944
Neo4j | 3.884 | 18.938 | 24.890 | 281.537

    说明

    • 表头"()“中数据是数据规模,以边为单位
    • 表中数据是批量插入的时间,单位是s
    • 例如,HugeGraph使用RocksDB插入amazon0601数据集的300w条边,花费5.711s
    结论
    • 批量插入性能 HugeGraph(RocksDB) > Neo4j > Titan(thrift+Cassandra)

    2.2 遍历性能

    2.2.1 术语说明
    • FN(Find Neighbor), 遍历所有vertex, 根据vertex查邻接edge, 通过edge和vertex查other vertex
    • FA(Find Adjacent), 遍历所有edge,根据edge获得source vertex和target vertex
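这两种遍历用 Gremlin 表达大致等价于下面的写法(仅作示意):

  // FN:遍历所有顶点,经由邻接边到达另一端顶点
  g.V().bothE().otherV()
  // FA:遍历所有边,取出每条边的 source/target 顶点
  g.E().bothV()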
    2.2.2 FN性能
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
HugeGraph | 4.072 | 45.118 | 66.006 | 609.083
Titan | 8.084 | 92.507 | 184.543 | 1099.371
Neo4j | 2.424 | 10.537 | 11.609 | 106.919

    说明

    • 表头”()“中数据是数据规模,以顶点为单位
    • 表中数据是遍历顶点花费的时间,单位是s
    • 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有顶点,并查找邻接边和另一顶点,总共耗时45.118s
    2.2.3 FA性能
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 1.540 | 10.764 | 11.243 | 151.271
Titan | 7.361 | 93.344 | 169.218 | 1085.235
Neo4j | 1.673 | 4.775 | 4.284 | 40.507

    说明

    • 表头”()“中数据是数据规模,以边为单位
    • 表中数据是遍历边花费的时间,单位是s
    • 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有边,并查询每条边的两个顶点,总共耗时10.764s
    结论
    • 遍历性能 Neo4j > HugeGraph(RocksDB) > Titan(thrift+Cassandra)

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    • FS(Find Shortest Path), 寻找最短路径
    • K-neighbor,从起始vertex出发,通过K跳边能够到达的所有顶点, 包括1, 2, 3…(K-1), K跳边可达vertex
    • K-out, 从起始vertex出发,恰好经过K跳out边能够到达的顶点
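K-out / K-neighbor 的语义可以用 Gremlin 近似表达如下,仅作示意(sourceId 为假设的起始顶点 id,以 K=3 为例):

  // K-out:从起始顶点出发恰好走 3 跳 out 边可达的顶点
  g.V(sourceId).repeat(out()).times(3).dedup()
  // K-neighbor:3 跳以内(含第 1、2、3 跳)可达的所有顶点
  g.V(sourceId).repeat(both()).emit().times(3).dedup()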
    FS性能
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
HugeGraph | 0.494 | 0.103 | 3.364 | 8.155
Titan | 11.818 | 0.239 | 377.709 | 575.678
Neo4j | 1.719 | 1.800 | 1.956 | 8.530

    说明

    • 表头”()“中数据是数据规模,以边为单位
    • 表中数据是找到从第一个顶点出发到达随机选择的100个顶点的最短路径的时间,单位是s
    • 例如,HugeGraph使用RocksDB后端在图amazon0601中查找第一个顶点到100个随机顶点的最短路径,总共耗时0.103s
    结论
    • 在数据规模小或者顶点关联关系少的场景下,HugeGraph性能优于Neo4j和Titan
    • 随着数据规模增大且顶点的关联度增高,HugeGraph与Neo4j性能趋近,都远高于Titan
    K-neighbor性能
顶点\深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度
v1 时间 | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM
v111 时间 | 0.027s | 0.034s | 0.115s | 1.36s | OOM
v1111 时间 | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    K-out性能
顶点\深度 | 一度 | 二度 | 三度 | 四度 | 五度 | 六度
v1 时间 | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM
v1 数量 | 10 | 133 | 2,453 | 50,830 | 1,128,688
v111 时间 | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM
v111 数量 | 10 | 211 | 4,944 | 113,150 | 2,629,970
v1111 时间 | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM
v1111 数量 | 10 | 140 | 2,555 | 50,825 | 1,070,230

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    结论
    • FS场景,HugeGraph性能优于Neo4j和Titan
    • K-neighbor和K-out场景,HugeGraph能够实现在5度范围内秒级返回结果

    2.4 图综合性能测试-CW

数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000
HugeGraph(core) | 20.804 | 242.099 | 744.780 | 1700.547
Titan | 45.790 | 820.633 | 2652.235 | 9568.623
Neo4j | 5.913 | 50.267 | 142.354 | 460.880

    说明

    • “规模"以顶点为单位
    • 表中数据是社区发现完成需要的时间,单位是s,例如HugeGraph使用RocksDB后端在规模10000的数据集,社区聚合不再变化,需要耗时744.780s
    • CW测试是CRUD的综合评估
    • 该测试中HugeGraph跟Titan一样,没有通过client,直接对core操作
    结论
    • 社区聚类算法性能 Neo4j > HugeGraph > Titan

    2 - HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    • 顶点/边的单条插入
    • 顶点/边的批量插入
    • 顶点/边的查询

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况

    2.1 - v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    • 并发2200,顶点的吞吐量是2026.8,每秒可处理的数据:2026.8*200=405360/s
    边的最大插入速度
    image

    ####### 结论:

    • 并发900,边的吞吐量是776.9,每秒可处理的数据:776.9*500=388450/s

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

    ####### 结论:

    • 并发11500,吞吐量为10730,顶点的单条插入并发能力为11500
    边的单条插入
    image

    ####### 结论:

    • 并发9000,吞吐量是8418,边的单条插入并发能力为9000

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

    ####### 结论:

    • 并发14000,吞吐量是12663,顶点的按id查询的并发能力为14000,平均延时为44ms
    边的按id查询
    image

    ####### 结论:

    • 并发13000,吞吐量是12225,边的按id查询的并发能力为13000,平均延时为12ms

    2.2 - v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    • 并发3500,顶点的吞吐量是261,每秒可处理的数据:261*200=52200/s
    边的最大插入速度
    image

    ####### 结论:

    • 并发1000,边的吞吐量是323,每秒可处理的数据:323*500=161500/s

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

    ####### 结论:

    • 并发9000,吞吐量为8400,顶点的单条插入并发能力为9000
    边的单条插入
    image

    ####### 结论:

    • 并发4500,吞吐量是4160,边的单条插入并发能力为4500

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

    ####### 结论:

    • 并发14500,吞吐量是13576,顶点的按id查询的并发能力为14500,平均延时为11ms
    边的按id查询
    image

    ####### 结论:

    • 并发12000,吞吐量是10688,边的按id查询的并发能力为12000,平均延时为63ms

    2.3 - v0.4.4

    1 测试环境

    被压机器信息

机器编号 | CPU | Memory | 网卡 | 磁盘
1 | 24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps | 1.4T HDD
2 | 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD,2.7T HDD
    • 起压力机器信息:与编号 1 机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:
• 并发1000,边的吞吐量是451,每秒可处理的数据:451*500条=225500/s
    • 并发2000,顶点的吞吐量是1842.4,每秒可处理的数据:1842.4*200=368480/s

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:
    • 同样使用HDD硬盘,CPU和内存增加了1倍
    • 边:吞吐量从268提升至426,性能提升了约60%
    • 顶点:吞吐量从1263.8提升至1842.4,性能提升了约45%

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:
    • 边:使用SSD吞吐量451.7,使用HDD吞吐量426.6,性能提升5%
    • 顶点:使用SSD吞吐量1842.4,使用HDD吞吐量1794,性能提升约3%

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:
    • 顶点:1000并发,响应时间7ms和1500并发响应时间1028ms差距悬殊,且吞吐量一直保持在1300左右,因此拐点数据应该在1300 ,且并发1300时,响应时间已达到22ms,在可控范围内,相比HugeGraph 0.2(1000并发:平均响应时间8959ms),处理能力出现质的飞跃;
    • 边:从1000并发到2000并发,处理时间过长,超过3s,且吞吐量几乎在270左右浮动,因此继续增大并发线程数吞吐量不会再大幅增长,270 是一个拐点,跟HugeGraph 0.2版本(1000并发:平均响应时间31849ms)相比较,处理能力提升非常明显;

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    image
    结论:
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时小于1ms, 6000并发无错误,平均耗时5ms,在可接受范围内;
      • 8000并发:存在0.01%的错误,已经无法处理,出现connection timeout错误,顶峰应该在7000左右
    • 边:
  • 4000并发:响应时间1ms,6000并发无任何异常,平均响应时间8ms,主要差异在于 IO network recv和send以及CPU;
  • 8000并发:存在0.01%的错误率,平均耗时15ms,拐点应该在7000左右,跟顶点结果匹配;

    2.4 - v0.2

    1 测试环境

    1.1 软硬件信息

    起压和被压机器配置相同,基本参数如下:

CPU | Memory | 网卡
24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz | 61G | 1000Mbps

    测试工具:apache-Jmeter-2.5.1

    1.2 服务配置

    • HugeGraph版本:0.2
    • 后端存储:使用服务内嵌的cassandra-3.10,单点部署;
    • 后端配置修改:修改了cassandra.yaml文件中的以下两个属性,其余选项均保持默认
      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -
    • HugeGraphServer 与 HugeGremlinServer 与cassandra都在同一机器上启动,server 相关的配置文件除主机和端口有修改外,其余均保持默认。

    1.3 名词解释

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
• Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
property_keys | 331000 | 1 | 1 | 2 | 0 | 172 | 0.00% | 920.7/sec | 178.1
vertex_labels | 331000 | 1 | 2 | 2 | 1 | 126 | 0.00% | 920.7/sec | 193.4
edge_labels | 331000 | 2 | 2 | 3 | 1 | 158 | 0.00% | 920.7/sec | 242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
single_insert_vertices | 331000 | 0 | 1 | 1 | 0 | 21 | 0.00% | 920.7/sec | 234.4
single_insert_edges | 331000 | 2 | 2 | 3 | 1 | 53 | 0.00% | 920.7/sec | 309.1
    结论
    • 顶点:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    • 边:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    性能指标
Concurrency | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
2000(vertex) | 661916 | 1 | 1 | 1 | 0 | 3012 | 0.00% | 1842.9/sec | 469.1
4000(vertex) | 1316124 | 13 | 1 | 14 | 0 | 9023 | 0.00% | 3673.1/sec | 935.0
5000(vertex) | 1468121 | 1010 | 1135 | 1227 | 0 | 9223 | 0.06% | 4095.6/sec | 1046.0
7000(vertex) | 1378454 | 1617 | 1708 | 1886 | 0 | 9361 | 0.08% | 3860.3/sec | 987.1
2000(edge) | 629399 | 953 | 1043 | 1113 | 1 | 9001 | 0.00% | 1750.3/sec | 587.6
3000(edge) | 648364 | 2258 | 2404 | 2500 | 2 | 9001 | 0.00% | 1810.7/sec | 607.9
4000(edge) | 649904 | 1992 | 2112 | 2211 | 1 | 9001 | 0.06% | 1812.5/sec | 608.5
    结论
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时13ms;
      • 5000并发:每秒处理5000个数据的插入,就会存在0.06%的错误,应该已经处理不了了,顶峰应该在4000
    • 边:
  • 1000并发:响应时间2ms,跟2000并发的响应时间相差较多,主要是 IO network recv和send以及CPU几乎增加了一倍;
      • 2000并发:每秒处理2000个数据的插入,平均耗时953ms,平均每秒处理1750个请求;
      • 3000并发:每秒处理3000个数据的插入,平均耗时2258ms,平均每秒处理1810个请求;
      • 4000并发:每秒处理4000个数据的插入,平均每秒处理1812个请求;

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
Label | Samples | Average | Median | 90%Line | Min | Max | Error% | Throughput | KB/sec
batch_insert_vertices | 37162 | 8959 | 9595 | 9704 | 17 | 9852 | 0.00% | 103.4/sec | 393.3
batch_insert_edges | 10800 | 31849 | 34544 | 35132 | 435 | 35747 | 0.00% | 28.8/sec | 814.9
    结论
• 顶点:平均响应时间为8959ms,处理时间过长。每个请求插入199条数据,平均每秒处理103个请求,则每秒平均总共处理的数据为199*103约等于2w条数据;
    • 边:平均响应时间31849ms,处理时间过长。每个请求插入499个数据,平均每秒处理28个请求,则每秒平均总共处理的数据为28*499约等于13900条数据;

    3 - HugeGraph-Loader Performance

    使用场景

    当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据

    性能

    测试均采用网址数据的边数据

    RocksDB单机性能

    • 关闭label index,22.8w edges/s
    • 开启label index,15.3w edges/s

    Cassandra集群性能

    • 默认开启label index,6.3w edges/s

4 - HugeGraph BenchMark Performance(v0.4.4)

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    • Massive Insertion,批量插入顶点和边,一定数量的顶点或边一次性提交

    • Single Insertion,单条插入,每个顶点或者每条边立即提交

    • Query,主要是图数据库的基本查询操作:

      • Find Neighbors,查询所有顶点的邻居
      • Find Adjacent Nodes,查询所有边的邻接顶点
      • Find Shortest Path,查询第一个顶点到100个随机顶点的最短路径
    • Clustering,基于Louvain Method的社区发现算法

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    • HugeGraph版本:0.4.4,RestServer和Gremlin Server和backends都在同一台服务器上
    • Cassandra版本:cassandra-3.10,commit-log 和data共用SSD
    • RocksDB版本:rocksdbjni-5.8.6
    • Titan版本:0.5.4, 使用thrift+Cassandra模式

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 9.516 | 88.123 | 111.586
RocksDB | 2.345 | 14.076 | 16.636
Cassandra | 11.930 | 108.709 | 101.959
Memory | 3.077 | 15.204 | 13.841

    说明

    • 表头"()“中数据是数据规模,以边为单位
    • 表中数据是批量插入的时间,单位是s
    • 例如,HugeGraph使用RocksDB插入amazon0601数据集的300w条边,花费14.076s,速度约为21w edges/s
    结论
    • RocksDB和Memory后端插入性能优于Cassandra
    • HugeGraph和Titan同样使用Cassandra作为后端的情况下,插入性能接近

    2.2 遍历性能

    2.2.1 术语说明
    • FN(Find Neighbor), 遍历所有vertex, 根据vertex查邻接edge, 通过edge和vertex查other vertex
    • FA(Find Adjacent), 遍历所有edge,根据edge获得source vertex和target vertex
    2.2.2 FN性能
Backend | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
Titan | 7.724 | 70.935 | 128.884
RocksDB | 8.876 | 65.852 | 63.388
Cassandra | 13.125 | 126.959 | 102.580
Memory | 22.309 | 207.411 | 165.609

    说明

    • 表头”()“中数据是数据规模,以顶点为单位
    • 表中数据是遍历顶点花费的时间,单位是s
    • 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有顶点,并查找邻接边和另一顶点,总共耗时65.852s
    2.2.3 FA性能
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 7.119 | 63.353 | 115.633
RocksDB | 6.032 | 64.526 | 52.721
Cassandra | 9.410 | 102.766 | 94.197
Memory | 12.340 | 195.444 | 140.89

    说明

    • 表头”()“中数据是数据规模,以边为单位
    • 表中数据是遍历边花费的时间,单位是s
    • 例如,HugeGraph使用RocksDB后端遍历amazon0601的所有边,并查询每条边的两个顶点,总共耗时64.526s
    结论
    • HugeGraph RocksDB > Titan thrift+Cassandra > HugeGraph Cassandra > HugeGraph Memory

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    • FS(Find Shortest Path), 寻找最短路径
    • K-neighbor,从起始vertex出发,通过K跳边能够到达的所有顶点, 包括1, 2, 3…(K-1), K跳边可达vertex
    • K-out, 从起始vertex出发,恰好经过K跳out边能够到达的顶点
    FS性能
Backend | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
Titan | 11.333 | 0.313 | 376.06
RocksDB | 44.391 | 2.221 | 268.792
Cassandra | 39.845 | 3.337 | 331.113
Memory | 35.638 | 2.059 | 388.987

    说明

    • 表头”()“中数据是数据规模,以边为单位
    • 表中数据是找到从第一个顶点出发到达随机选择的100个顶点的最短路径的时间,单位是s
    • 例如,HugeGraph使用RocksDB查找第一个顶点到100个随机顶点的最短路径,总共耗时2.059s
    结论
    • 在数据规模小或者顶点关联关系少的场景下,Titan最短路径性能优于HugeGraph
    • 随着数据规模增大且顶点的关联度增高,HugeGraph最短路径性能优于Titan
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    • HugeGraph-Server的JVM内存设置为32GB,数据量过大时会出现OOM
    结论
    • FS场景,HugeGraph性能优于Titan
    • K-neighbor和K-out场景,HugeGraph能够实现在5度范围内秒级返回结果

    2.4 图综合性能测试-CW

数据库 | 规模1000 | 规模5000 | 规模10000 | 规模20000
Titan | 45.943 | 849.168 | 2737.117 | 9791.46
Memory(core) | 41.077 | 1825.905 | * | *
Cassandra(core) | 39.783 | 862.744 | 2423.136 | 6564.191
RocksDB(core) | 33.383 | 199.894 | 763.869 | 1677.813

    说明

    • “规模"以顶点为单位
    • 表中数据是社区发现完成需要的时间,单位是s,例如HugeGraph使用RocksDB后端在规模10000的数据集,社区聚合不再变化,需要耗时763.869s
    • “*“表示超过10000s未完成
    • CW测试是CRUD的综合评估
    • 后三者分别是HugeGraph的不同后端,该测试中HugeGraph跟Titan一样,没有通过client,直接对core操作
    结论
    • HugeGraph在使用Cassandra后端时,性能略优于Titan,随着数据规模的增大,优势越来越明显,数据规模20000时,比Titan快30%
    • HugeGraph在使用RocksDB后端时,性能远高于Titan和HugeGraph的Cassandra后端,分别比两者快了6倍和4倍
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    3 - HugeGraph-Loader Performance

    使用场景

    当要批量插入的图数据(包括顶点和边)条数为billion级别及以下,或者总数据量小于TB时,可以采用HugeGraph-Loader工具持续、高速导入图数据

    性能

    测试均采用网址数据的边数据

    RocksDB单机性能

    Cassandra集群性能

4 - HugeGraph BenchMark Performance(v0.4.4)

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan9.51688.123111.586
    RocksDB2.34514.07616.636
    Cassandra11.930108.709101.959
    Memory3.07715.20413.841

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)
    Titan7.72470.935128.884
    RocksDB8.87665.85263.388
    Cassandra13.125126.959102.580
    Memory22.309207.411165.609

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan7.11963.353115.633
    RocksDB6.03264.52652.721
    Cassandra9.410102.76694.197
    Memory12.340195.444140.89

    说明

    结论

    2.3 HugeGraph-图常用分析方法性能

    术语说明
    FS性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan11.3330.313376.06
    RocksDB44.3912.221268.792
    Cassandra39.8453.337331.113
    Memory35.6382.059388.987

    说明

    结论
    K-neighbor性能
    顶点深度一度二度三度四度五度六度
    v1时间0.031s0.033s0.048s0.500s11.27sOOM
    v111时间0.027s0.034s0.1151.36sOOM
    v1111时间0.039s0.027s0.052s0.511s10.96sOOM

    说明

    K-out性能
    顶点深度一度二度三度四度五度六度
    v1时间0.054s0.057s0.109s0.526s3.77sOOM
    10133245350,8301,128,688
    v111时间0.032s0.042s0.136s1.25s20.62sOOM
    1021149441131502,629,970
    v1111时间0.039s0.045s0.053s1.10s2.92sOOM
    101402555508251,070,230

    说明

    结论

    2.4 图综合性能测试-CW

    数据库规模1000规模5000规模10000规模20000
    Titan45.943849.1682737.1179791.46
    Memory(core)41.0771825.905**
    Cassandra(core)39.783862.7442423.1366564.191
    RocksDB(core)33.383199.894763.8691677.813

    说明

    结论
    diff --git a/docs/performance/api-preformance/_print/index.html b/docs/performance/api-preformance/_print/index.html index 430e5ae98..e9737d309 100644 --- a/docs/performance/api-preformance/_print/index.html +++ b/docs/performance/api-preformance/_print/index.html @@ -10,7 +10,7 @@

    This is the multi-page printable view of this section. Click here to print.

    Return to the regular view of this page.

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    • 顶点/边的单条插入
    • 顶点/边的批量插入
    • 顶点/边的查询

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况

    1 - v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    • 并发2200,顶点的吞吐量是2026.8,每秒可处理的数据:2026.8*200=405360/s
    边的最大插入速度
    image

    ####### 结论:

    • 并发900,边的吞吐量是776.9,每秒可处理的数据:776.9*500=388450/s

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

    ####### 结论:

    • 并发11500,吞吐量为10730,顶点的单条插入并发能力为11500
    边的单条插入
    image

    ####### 结论:

    • 并发9000,吞吐量是8418,边的单条插入并发能力为9000

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

    ####### 结论:

    • 并发14000,吞吐量是12663,顶点的按id查询的并发能力为14000,平均延时为44ms
    边的按id查询
    image

    ####### 结论:

    • 并发13000,吞吐量是12225,边的按id查询的并发能力为13000,平均延时为12ms

    2 - v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD
    • 起压力机器信息:与被压机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    • 并发3500,顶点的吞吐量是261,每秒可处理的数据:261*200=52200/s
    边的最大插入速度
    image

    ####### 结论:

    • 并发1000,边的吞吐量是323,每秒可处理的数据:323*500=161500/s

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的单条插入
    image

    ####### 结论:

    • 并发9000,吞吐量为8400,顶点的单条插入并发能力为9000
    边的单条插入
    image

    ####### 结论:

    • 并发4500,吞吐量是4160,边的单条插入并发能力为4500

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    顶点的按id查询
    image

    ####### 结论:

    • 并发14500,吞吐量是13576,顶点的按id查询的并发能力为14500,平均延时为11ms
    边的按id查询
    image

    ####### 结论:

    • 并发12000,吞吐量是10688,边的按id查询的并发能力为12000,平均延时为63ms

    3 - v0.4.4

    1 测试环境

    被压机器信息

    机器编号CPUMemory网卡磁盘
    124 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps1.4T HDD
    248 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD
    • 起压力机器信息:与编号 1 机器同配置
    • 测试工具:apache-Jmeter-2.5.1

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
    • Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:
• 并发1000,边的吞吐量是451,每秒可处理的数据:451*500条=225500/s
    • 并发2000,顶点的吞吐量是1842.4,每秒可处理的数据:1842.4*200=368480/s

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:
    • 同样使用HDD硬盘,CPU和内存增加了1倍
    • 边:吞吐量从268提升至426,性能提升了约60%
    • 顶点:吞吐量从1263.8提升至1842.4,性能提升了约45%

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:
    • 边:使用SSD吞吐量451.7,使用HDD吞吐量426.6,性能提升5%
    • 顶点:使用SSD吞吐量1842.4,使用HDD吞吐量1794,性能提升约3%

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:
    • 顶点:1000并发,响应时间7ms和1500并发响应时间1028ms差距悬殊,且吞吐量一直保持在1300左右,因此拐点数据应该在1300 ,且并发1300时,响应时间已达到22ms,在可控范围内,相比HugeGraph 0.2(1000并发:平均响应时间8959ms),处理能力出现质的飞跃;
    • 边:从1000并发到2000并发,处理时间过长,超过3s,且吞吐量几乎在270左右浮动,因此继续增大并发线程数吞吐量不会再大幅增长,270 是一个拐点,跟HugeGraph 0.2版本(1000并发:平均响应时间31849ms)相比较,处理能力提升非常明显;

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    image
    结论:
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时小于1ms, 6000并发无错误,平均耗时5ms,在可接受范围内;
      • 8000并发:存在0.01%的错误,已经无法处理,出现connection timeout错误,顶峰应该在7000左右
    • 边:
  • 4000并发:响应时间1ms,6000并发无任何异常,平均响应时间8ms,主要差异在于 IO network recv和send以及CPU;
  • 8000并发:存在0.01%的错误率,平均耗时15ms,拐点应该在7000左右,跟顶点结果匹配;

    4 - v0.2

    1 测试环境

    1.1 软硬件信息

    起压和被压机器配置相同,基本参数如下:

    CPUMemory网卡
    24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps

    测试工具:apache-Jmeter-2.5.1

    1.2 服务配置

    • HugeGraph版本:0.2
    • 后端存储:使用服务内嵌的cassandra-3.10,单点部署;
    • 后端配置修改:修改了cassandra.yaml文件中的以下两个属性,其余选项均保持默认
      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -
    • HugeGraphServer 与 HugeGremlinServer 与cassandra都在同一机器上启动,server 相关的配置文件除主机和端口有修改外,其余均保持默认。

    1.3 名词解释

    • Samples – 本次场景中一共完成了多少个线程
    • Average – 平均响应时间
    • Median – 统计意义上面的响应时间的中值
    • 90% Line – 所有线程中90%的线程的响应时间都小于xx
    • Min – 最小响应时间
    • Max – 最大响应时间
    • Error – 出错率
• Throughput – 吞吐量
    • KB/sec – 以流量做衡量的吞吐量

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    • 顶点:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    • 边:平均响应时间1ms,每个请求插入一条数据,平均每秒处理920个请求,则每秒平均总共处理的数据为1*920约等于920条数据;
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    • 持续时间:5min
    • 服务异常标志:错误率大于0.00%
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论
    • 顶点:
      • 4000并发:正常,无错误率,平均耗时13ms;
      • 5000并发:每秒处理5000个数据的插入,就会存在0.06%的错误,应该已经处理不了了,顶峰应该在4000
    • 边:
  • 1000并发:响应时间2ms,跟2000并发的响应时间相差较多,主要是 IO network recv和send以及CPU几乎增加了一倍;
      • 2000并发:每秒处理2000个数据的插入,平均耗时953ms,平均每秒处理1750个请求;
      • 3000并发:每秒处理3000个数据的插入,平均耗时2258ms,平均每秒处理1810个请求;
      • 4000并发:每秒处理4000个数据的插入,平均每秒处理1812个请求;

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    • 并发量:1000
    • 持续时间:5min
    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论
• 顶点:平均响应时间为8959ms,处理时间过长。每个请求插入199条数据,平均每秒处理103个请求,则每秒平均总共处理的数据为199*103约等于2w条数据;
    • 边:平均响应时间31849ms,处理时间过长。每个请求插入499个数据,平均每秒处理28个请求,则每秒平均总共处理的数据为28*499约等于13900条数据;
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论
    diff --git a/docs/performance/api-preformance/hugegraph-api-0.2/index.html b/docs/performance/api-preformance/hugegraph-api-0.2/index.html index 8c02f5be8..c5c67ccdf 100644 --- a/docs/performance/api-preformance/hugegraph-api-0.2/index.html +++ b/docs/performance/api-preformance/hugegraph-api-0.2/index.html @@ -35,7 +35,7 @@ Create project issue Print entire section

    v0.2

    1 测试环境

    1.1 软硬件信息

    起压和被压机器配置相同,基本参数如下:

    CPUMemory网卡
    24 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps

    测试工具:apache-Jmeter-2.5.1

    1.2 服务配置

      batch_size_warn_threshold_in_kb: 1000
       batch_size_fail_threshold_in_kb: 1000
    -

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    Last modified April 17, 2022: rebuild doc (ef36544)
    +

    1.3 名词解释

    注:时间的单位均为ms

    2 测试结果

    2.1 schema

    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    property_keys33100011201720.00%920.7/sec178.1
    vertex_labels33100012211260.00%920.7/sec193.4
    edge_labels33100022311580.00%920.7/sec242.8

    结论:schema的接口,在1000并发持续5分钟的压力下,平均响应时间1-2ms,无压力

    2.2 single 插入

    2.2.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    single_insert_vertices3310000110210.00%920.7/sec234.4
    single_insert_edges3310002231530.00%920.7/sec309.1
    结论
    2.2.2 压力上限测试

    测试方法:不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    性能指标
    ConcurrencySamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    2000(vertex)661916111030120.00%1842.9/sec469.1
    4000(vertex)131612413114090230.00%3673.1/sec935.0
    5000(vertex)1468121101011351227092230.06%4095.6/sec1046.0
    7000(vertex)1378454161717081886093610.08%3860.3/sec987.1
    2000(edge)62939995310431113190010.00%1750.3/sec587.6
    3000(edge)648364225824042500290010.00%1810.7/sec607.9
    4000(edge)649904199221122211190010.06%1812.5/sec608.5
    结论

    2.3 batch 插入

    2.3.1 插入速率测试
    压力参数

    测试方法:固定并发量,测试server和后端的处理速率

    性能指标
    LabelSamplesAverageMedian90%LineMinMaxError%ThroughputKB/sec
    batch_insert_vertices371628959959597041798520.00%103.4/sec393.3
    batch_insert_edges10800318493454435132435357470.00%28.8/sec814.9
    结论

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html b/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html index 86f971ee9..845b62590 100644 --- a/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html +++ b/docs/performance/api-preformance/hugegraph-api-0.4.4/index.html @@ -32,7 +32,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.4.4

    1 测试环境

    被压机器信息

    机器编号CPUMemory网卡磁盘
    124 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps1.4T HDD
    248 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    image
    结论:

    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.4.4

    1 测试环境

    被压机器信息

    机器编号CPUMemory网卡磁盘
    124 Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz61G1000Mbps1.4T HDD
    248 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph每秒能够处理的请求数目上限是7000
    2. 批量插入速度远大于单条插入,在服务器上测试结果达到22w edges/s,37w vertices/s
    3. 后端是RocksDB,增大CPU数目和内存大小可以增大批量插入的性能。CPU和内存扩大一倍,性能增加45%-60%
    4. 批量插入场景,使用SSD替代HDD,性能提升较小,只有3%-5%

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点和边的最大插入速度(高性能服务器,使用SSD存储RocksDB数据):
    image
    结论:

    1. CPU和内存对插入性能的影响(服务器都使用HDD存储RocksDB数据,批量插入)

    image
    结论:

    2. SSD和HDD对插入性能的影响(高性能服务器,批量插入)

    image
    结论:

    3. 不同并发线程数对插入性能的影响(普通服务器,使用HDD存储RocksDB数据)

    image
    结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    image
    结论:

    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html b/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html index bbc0a1fc3..17cf9ab21 100644 --- a/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html +++ b/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/index.html @@ -35,7 +35,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.5.6 Cluster(Cassandra)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用15节点Cassandra集群,HugeGraph与Cassandra集群位于不同的服务器,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度分别为9000和4500
    2. 顶点和边的批量插入速度分别为5w/s和15w/s,远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到12000以上,且请求的平均延时小于70ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html b/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html index f6a9ed19a..94e21ad4c 100644 --- a/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html +++ b/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/index.html @@ -35,7 +35,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    v0.5.6 Stand-alone(RocksDB)

    1 测试环境

    被压机器信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD,2.7T HDD

    注:起压机器和被压机器在同一机房

    2 测试说明

    2.1 名词定义(时间的单位均为ms)

    2.2 底层存储

    后端存储使用RocksDB,HugeGraph与RocksDB都在同一机器上启动,server相关的配置文件除主机和端口有修改外,其余均保持默认。

    3 性能结果总结

    1. HugeGraph单条插入顶点和边的速度在每秒1w左右
    2. 顶点和边的批量插入速度远大于单条插入速度
    3. 按id查询顶点和边的并发度可达到13000以上,且请求的平均延时小于50ms

    4 测试结果及分析

    4.1 batch插入

    4.1.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数

    持续时间:5min

    顶点的最大插入速度:
    image

    ####### 结论:

    边的最大插入速度
    image

    ####### 结论:

    4.2 single插入

    4.2.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的单条插入
    image

    ####### 结论:

    边的单条插入
    image

    ####### 结论:

    4.3 按id查询

    4.3.1 压力上限测试
    测试方法

    不断提升并发量,测试server仍能正常提供服务的压力上限

    压力参数
    顶点的按id查询
    image

    ####### 结论:

    边的按id查询
    image

    ####### 结论:


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/api-preformance/index.html b/docs/performance/api-preformance/index.html index 61eab82ce..145ee79e4 100644 --- a/docs/performance/api-preformance/index.html +++ b/docs/performance/api-preformance/index.html @@ -12,7 +12,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况


    Last modified April 17, 2022: rebuild doc (ef36544)
    + Print entire section

    HugeGraph-API Performance

    HugeGraph API性能测试主要测试HugeGraph-Server对RESTful API请求的并发处理能力,包括:

    HugeGraph的每个发布版本的RESTful API的性能测试情况可以参考:

    之前的版本只提供HugeGraph所支持的后端种类中性能最好的API性能测试,从0.5.6版本开始,分别提供了单机和集群的性能情况


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/hugegraph-benchmark-0.4.4/index.html b/docs/performance/hugegraph-benchmark-0.4.4/index.html index bb7c1a3ca..351ca3e78 100644 --- a/docs/performance/hugegraph-benchmark-0.4.4/index.html +++ b/docs/performance/hugegraph-benchmark-0.4.4/index.html @@ -61,7 +61,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    1 测试环境

    1.1 硬件信息

    CPUMemory网卡磁盘
    48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz128G10000Mbps750GB SSD

    1.2 软件信息

    1.2.1 测试用例

    测试使用graphdb-benchmark,一个图数据库测试集。该测试集主要包含4类测试:

    1.2.2 测试数据集

    测试使用人造数据和真实数据

    本测试用到的数据集规模
    名称vertex数目edge数目文件大小
    email-enron.txt36,691367,6614MB
    com-youtube.ungraph.txt1,157,8062,987,62438.7MB
    amazon0601.txt403,3933,387,38847.9MB

    1.3 服务配置

    graphdb-benchmark适配的Titan版本为0.5.4

    2 测试结果

    2.1 Batch插入性能

    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan9.51688.123111.586
    RocksDB2.34514.07616.636
    Cassandra11.930108.709101.959
    Memory3.07715.20413.841

    说明

    结论

    2.2 遍历性能

    2.2.1 术语说明
    2.2.2 FN性能
    Backendemail-enron(3.6w)amazon0601(40w)com-youtube.ungraph(120w)
    Titan7.72470.935128.884
    RocksDB8.87665.85263.388
    Cassandra13.125126.959102.580
    Memory22.309207.411165.609

    说明

    2.2.3 FA性能
    Backendemail-enron(30w)amazon0601(300w)com-youtube.ungraph(300w)
    Titan7.11963.353115.633
    RocksDB6.03264.52652.721
    Cassandra9.410102.76694.197
    Memory12.340195.444140.89

    说明

    结论

2.3 Performance of HugeGraph's common graph analysis methods

Terminology
FS performance
| Backend | email-enron (30w) | amazon0601 (300w) | com-youtube.ungraph (300w) |
| Titan | 11.333 | 0.313 | 376.06 |
| RocksDB | 44.391 | 2.221 | 268.792 |
| Cassandra | 39.845 | 3.337 | 331.113 |
| Memory | 35.638 | 2.059 | 388.987 |

Notes

Conclusion
K-neighbor performance
| Vertex | Depth | degree 1 | degree 2 | degree 3 | degree 4 | degree 5 | degree 6 |
| v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | |
| v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes

K-out performance
| Vertex | Depth | degree 1 | degree 2 | degree 3 | degree 4 | degree 5 | degree 6 |
| v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| | size | 10 | 133 | 2453 | 50,830 | 1,128,688 | |
| v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| | size | 10 | 211 | 4944 | 113150 | 2,629,970 | |
| v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| | size | 10 | 140 | 2555 | 50825 | 1,070,230 | |

Notes

Conclusion

2.4 Comprehensive graph performance test - CW

| Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
| Titan | 45.943 | 849.168 | 2737.117 | 9791.46 |
| Memory (core) | 41.077 | 1825.905 | * | * |
| Cassandra (core) | 39.783 | 862.744 | 2423.136 | 6564.191 |
| RocksDB (core) | 33.383 | 199.894 | 763.869 | 1677.813 |

Notes

Conclusion

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/performance/hugegraph-benchmark-0.5.6/index.html b/docs/performance/hugegraph-benchmark-0.5.6/index.html index 1f84b2dcc..fe569cdb9 100644 --- a/docs/performance/hugegraph-benchmark-0.5.6/index.html +++ b/docs/performance/hugegraph-benchmark-0.5.6/index.html @@ -61,7 +61,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph BenchMark Performance

1 Test environment

1.1 Hardware information

| CPU | Memory | NIC | Disk |
| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G | 10000Mbps | 750GB SSD |

1.2 Software information

1.2.1 Test cases

The tests use graphdb-benchmark, a benchmark suite for graph databases. The suite mainly contains 4 types of tests:

1.2.2 Test datasets

The tests use both synthetic and real-world data

Scale of the datasets used in this test
| Name | Number of vertices | Number of edges | File size |
| email-enron.txt | 36,691 | 367,661 | 4MB |
| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
| amazon0601.txt | 403,393 | 3,387,388 | 47.9MB |
| com-lj.ungraph.txt | 3997961 | 34681189 | 479MB |

1.3 Service configuration

The Titan version adapted by graphdb-benchmark is 0.5.4

2 Test results

2.1 Batch insertion performance

| Backend | email-enron (30w) | amazon0601 (300w) | com-youtube.ungraph (300w) | com-lj.ungraph (3000w) |
| HugeGraph | 0.629 | 5.711 | 5.243 | 67.033 |
| Titan | 10.15 | 108.569 | 150.266 | 1217.944 |
| Neo4j | 3.884 | 18.938 | 24.890 | 281.537 |

Notes

Conclusion

2.2 Traversal performance

2.2.1 Terminology
2.2.2 FN performance
| Backend | email-enron (3.6w) | amazon0601 (40w) | com-youtube.ungraph (120w) | com-lj.ungraph (400w) |
| HugeGraph | 4.072 | 45.118 | 66.006 | 609.083 |
| Titan | 8.084 | 92.507 | 184.543 | 1099.371 |
| Neo4j | 2.424 | 10.537 | 11.609 | 106.919 |

Notes

2.2.3 FA performance
| Backend | email-enron (30w) | amazon0601 (300w) | com-youtube.ungraph (300w) | com-lj.ungraph (3000w) |
| HugeGraph | 1.540 | 10.764 | 11.243 | 151.271 |
| Titan | 7.361 | 93.344 | 169.218 | 1085.235 |
| Neo4j | 1.673 | 4.775 | 4.284 | 40.507 |

Notes

Conclusion

2.3 Performance of HugeGraph's common graph analysis methods

Terminology
FS performance
| Backend | email-enron (30w) | amazon0601 (300w) | com-youtube.ungraph (300w) | com-lj.ungraph (3000w) |
| HugeGraph | 0.494 | 0.103 | 3.364 | 8.155 |
| Titan | 11.818 | 0.239 | 377.709 | 575.678 |
| Neo4j | 1.719 | 1.800 | 1.956 | 8.530 |

Notes

Conclusion
K-neighbor performance
| Vertex | Depth | degree 1 | degree 2 | degree 3 | degree 4 | degree 5 | degree 6 |
| v1 | time | 0.031s | 0.033s | 0.048s | 0.500s | 11.27s | OOM |
| v111 | time | 0.027s | 0.034s | 0.115s | 1.36s | OOM | |
| v1111 | time | 0.039s | 0.027s | 0.052s | 0.511s | 10.96s | OOM |

Notes

K-out performance
| Vertex | Depth | degree 1 | degree 2 | degree 3 | degree 4 | degree 5 | degree 6 |
| v1 | time | 0.054s | 0.057s | 0.109s | 0.526s | 3.77s | OOM |
| | size | 10 | 133 | 2453 | 50,830 | 1,128,688 | |
| v111 | time | 0.032s | 0.042s | 0.136s | 1.25s | 20.62s | OOM |
| | size | 10 | 211 | 4944 | 113150 | 2,629,970 | |
| v1111 | time | 0.039s | 0.045s | 0.053s | 1.10s | 2.92s | OOM |
| | size | 10 | 140 | 2555 | 50825 | 1,070,230 | |

Notes

Conclusion
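
For context, the K-neighbor and K-out figures above correspond to the server's built-in traverser APIs; a call of that kind can be issued as below (host, graph name, source id, and depth are placeholders):

# K-neighbor: all vertices reachable within max_depth hops of the source vertex
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kneighbor?source=%221:marko%22&max_depth=2"

# K-out: vertices exactly max_depth hops away from the source vertex
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/traversers/kout?source=%221:marko%22&max_depth=2"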

2.4 Comprehensive graph performance test - CW

| Database | Scale 1000 | Scale 5000 | Scale 10000 | Scale 20000 |
| HugeGraph (core) | 20.804 | 242.099 | 744.780 | 1700.547 |
| Titan | 45.790 | 820.633 | 2652.235 | 9568.623 |
| Neo4j | 5.913 | 50.267 | 142.354 | 460.880 |

Notes

Conclusion

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/performance/hugegraph-loader-performance/index.html b/docs/performance/hugegraph-loader-performance/index.html index 4f8e495b8..a53304dd0 100644 --- a/docs/performance/hugegraph-loader-performance/index.html +++ b/docs/performance/hugegraph-loader-performance/index.html @@ -19,7 +19,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-Loader Performance

Use cases

When the graph data to be bulk-inserted (vertices and edges) is at the billion-record level or below, or the total data volume is under 1 TB, HugeGraph-Loader can be used to import graph data continuously and at high speed
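
As an illustration only (the script name, file paths, graph name, and server address are placeholders, and the exact options depend on the loader version; see the HugeGraph-Loader quick start), a load is typically started like this:

# Load the vertices/edges described in struct.json using the schema in schema.groovy
sh bin/hugegraph-loader.sh -g hugegraph \
   -f example/file/struct.json -s example/file/schema.groovy \
   -h 127.0.0.1 -p 8080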

Performance

All tests use the edge data of a web-URL dataset

RocksDB standalone performance

Cassandra cluster performance


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/performance/index.html b/docs/performance/index.html index 405c74d47..2eb5dbcc5 100644 --- a/docs/performance/index.html +++ b/docs/performance/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    PERFORMANCE


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/quickstart/_print/index.html b/docs/quickstart/_print/index.html index 09c7cf37b..080a38b19 100644 --- a/docs/quickstart/_print/index.html +++ b/docs/quickstart/_print/index.html @@ -1340,7 +1340,7 @@ # NOTE: diagnostic log exist only when the job fails, and it will only be saved for one hour. kubectl get event --field-selector reason=ComputerJobFailed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system

    2.2.8 Show success event of a job

    NOTE: it will only be saved for one hour

    kubectl get event --field-selector reason=ComputerJobSucceed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
    -

    2.2.9 Query algorithm results

If the results are output to HugeGraph-Server, they are consistent with the local mode; if the results are output to HDFS, check the result files under the /hugegraph-computer/results/{jobId} directory.
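
For example, assuming the default output directory, the HDFS results can be inspected with the standard HDFS CLI (the job id below is a placeholder):

hdfs dfs -ls /hugegraph-computer/results/{jobId}
hdfs dfs -cat /hugegraph-computer/results/{jobId}/* | head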

    3 Built-In algorithms document

    3.1 Supported algorithms list:

    Centrality Algorithm:
    Community Algorithm:
    Path Algorithm:

For more algorithms, please see: Built-In algorithms

3.2 Algorithm description

    TODO

    4 Algorithm development guide

    TODO

    diff --git a/docs/quickstart/hugegraph-client/index.html b/docs/quickstart/hugegraph-client/index.html index cefbacfb2..b7b067110 100644 --- a/docs/quickstart/hugegraph-client/index.html +++ b/docs/quickstart/hugegraph-client/index.html @@ -297,7 +297,7 @@ hugeClient.close(); } } -

    4.4 Run The Example

Before running the Example, you need to start the Server. For the startup process, see HugeGraph-Server Quick Start.

    4.5 More Information About Example

See Introduce basic API of HugeGraph-Client.


    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/quickstart/hugegraph-computer/index.html b/docs/quickstart/hugegraph-computer/index.html index e9facedf0..1c00b4c7c 100644 --- a/docs/quickstart/hugegraph-computer/index.html +++ b/docs/quickstart/hugegraph-computer/index.html @@ -70,7 +70,7 @@ # NOTE: diagnostic log exist only when the job fails, and it will only be saved for one hour. kubectl get event --field-selector reason=ComputerJobFailed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system

    2.2.8 Show success event of a job

    NOTE: it will only be saved for one hour

    kubectl get event --field-selector reason=ComputerJobSucceed --field-selector involvedObject.name=pagerank-sample -n hugegraph-computer-system
    -

    2.2.9 Query algorithm results

If the results are output to HugeGraph-Server, they are consistent with the local mode; if the results are output to HDFS, check the result files under the /hugegraph-computer/results/{jobId} directory.

    3 Built-In algorithms document

    3.1 Supported algorithms list:

    Centrality Algorithm:
    Community Algorithm:
    Path Algorithm:

For more algorithms, please see: Built-In algorithms

3.2 Algorithm description

    TODO

    4 Algorithm development guide

    TODO


    Last modified November 28, 2022: improve computer doc (#157) (862b048)
    diff --git a/docs/quickstart/hugegraph-hubble/index.html b/docs/quickstart/hugegraph-hubble/index.html index 85c5c1ada..cc785d534 100644 --- a/docs/quickstart/hugegraph-hubble/index.html +++ b/docs/quickstart/hugegraph-hubble/index.html @@ -5,7 +5,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph-Hubble Quick Start

    1 HugeGraph-Hubble Overview

HugeGraph is an analysis-oriented graph database system that supports batch operations, fully supports the Apache TinkerPop3 framework and the Gremlin graph query language, and provides a complete tool-chain ecosystem covering export, backup, and recovery, effectively addressing the storage, query, and correlation-analysis needs of massive graph data. HugeGraph is widely used in fields such as risk control, insurance claims, recommendation and search, crime crackdown by public security, knowledge graphs, network security, and IT operation and maintenance of banks and securities companies, and is committed to enabling more industries, organizations, and users to benefit from the comprehensive value of their data.

HugeGraph-Hubble is HugeGraph’s one-stop visual analysis platform. The platform covers the whole process from data modeling, through efficient data import, to real-time and offline analysis of data and unified management of graphs, providing a wizard-style workflow for graph applications. It is designed to make usage smoother, lower the barrier to entry, and provide a more efficient and easy-to-use user experience.

    The platform mainly includes the following modules:

    Graph Management

    The graph management module realizes the unified management of multiple graphs and graph access, editing, deletion, and query by creating graph and connecting the platform and graph data.

    Metadata Modeling

    The metadata modeling module realizes the construction and management of graph models by creating attribute libraries, vertex types, edge types, and index types. The platform provides two modes, list mode and graph mode, which can display the metadata model in real time, which is more intuitive. At the same time, it also provides a metadata reuse function across graphs, which saves the tedious and repetitive creation process of the same metadata, greatly improves modeling efficiency and enhances ease of use.

    Data Import

Data import converts the user’s business data into graph vertices and edges and inserts them into the graph database. The platform provides a wizard-style visual import module: by creating import tasks, multiple import tasks can be managed and run in parallel to improve import performance. After entering an import task, you only need to follow the platform’s step-by-step prompts, upload files as needed, and fill in the content to complete the graph data import. Resumable uploads and an error-retry mechanism are also supported, reducing import cost and improving efficiency.

    Graph Analysis

By entering statements in the graph traversal language Gremlin, you can run high-performance, general-purpose analysis of graph data, including customized multi-dimensional path queries starting from vertices. Three display modes are provided for graph results: graph form, table form, and JSON form, with multi-dimensional presentation to meet the needs of the various scenarios in which users work. Running records and a collection of common statements are provided, making graph operations traceable and query inputs reusable and shareable, which is fast and efficient. Graph data can be exported, with JSON as the export format.

    Task Management

    For Gremlin tasks that need to traverse the whole graph, index creation and reconstruction and other time-consuming asynchronous tasks, the platform provides corresponding task management functions to achieve unified management and result viewing of asynchronous tasks.

    2 Platform Workflow

    The module usage process of the platform is as follows:

    image

    3 Platform Instructions

    3.1 Graph Management

    3.1.1 Graph creation

    Under the graph management module, click [Create graph], and realize the connection of multiple graphs by filling in the graph ID, graph name, host name, port number, username, and password information.

    image

Create a graph by filling in the content as follows:

    image
    3.1.2 Graph Access

    Realize the information access of the graph space. After entering, you can perform operations such as multidimensional query analysis, metadata management, data import, and algorithm analysis of the graph.

    image
    3.1.3 Graph management
    1. Users can achieve unified management of graphs through overview, search, and information editing and deletion of single graphs.
    2. Search range: You can search for the graph name and ID.
    image

    3.2 Metadata Modeling (list + graph mode)

    3.2.1 Module entry

    Left navigation:

    image
    3.2.2 Property type
    3.2.2.1 Create type
    1. Fill in or select the attribute name, data type, and cardinality to complete the creation of the attribute.
    2. Created attributes can be used as attributes of vertex type and edge type.

    List mode:

    image

    Graph mode:

    image
    3.2.2.2 Reuse
    1. The platform provides the [Reuse] function, which can directly reuse the metadata of other graphs.
    2. Select the graph ID that needs to be reused, and continue to select the attributes that need to be reused. After that, the platform will check whether there is a conflict. After passing, the metadata can be reused.

    Select reuse items:

    image

    Check reuse items:

    image
    3.2.2.3 Management
    1. You can delete a single item or delete it in batches in the attribute list.
    3.2.3 Vertex type
    3.2.3.1 Create type
1. Fill in or select the vertex type name, ID strategy, associated attributes, primary-key attributes, vertex style, the content displayed below the vertex in query results, and index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the vertex type.

    List mode:

    image

    Graph mode:

    image
    3.2.3.2 Reuse
    1. The multiplexing of vertex types will reuse the attributes and attribute indexes associated with this type together.
    2. The reuse method is similar to the property reuse, see 3.2.2.2.
    3.2.3.3 Administration
    1. Editing operations are available. The vertex style, association type, vertex display content, and attribute index can be edited, and the rest cannot be edited.

    2. You can delete a single item or delete it in batches.

    image
    3.2.4 Edge Types
    3.2.4.1 Create
1. Fill in or select the edge type name, start-point type, end-point type, associated attributes, whether multiple connections are allowed, edge style, the content displayed below the edge in query results, and index information (whether to create a type index, and the specific attribute indexes) to complete the creation of the edge type.

    List mode:

    image

    Graph mode:

    image
    3.2.4.2 Reuse
    1. The reuse of the edge type will reuse the start point type, end point type, associated attribute and attribute index of this type.
    2. The reuse method is similar to the property reuse, see 3.2.2.2.
    3.2.4.3 Administration
    1. Editing operations are available. Edge styles, associated attributes, edge display content, and attribute indexes can be edited, and the rest cannot be edited, the same as the vertex type.
    2. You can delete a single item or delete it in batches.
    3.2.5 Index Types

    Displays vertex and edge indices for vertex types and edge types.

    3.3 Data Import

    The usage process of data import is as follows:

    image
    3.3.1 Module entrance

    Left navigation:

    image
    3.3.2 Create task
    1. Fill in the task name and remarks (optional) to create an import task.
    2. Multiple import tasks can be created and imported in parallel.
    image
    3.3.3 Uploading files
1. Upload the file(s) to be imported. The currently supported format is CSV; more formats will be supported over time.
    2. Multiple files can be uploaded at the same time.
    image
    3.3.4 Setting up data mapping
    1. Set up data mapping for uploaded files, including file settings and type settings

    2. File settings: Check or fill in whether to include the header, separator, encoding format and other settings of the file itself, all set the default values, no need to fill in manually

    3. Type setting:

      1. Vertex map and edge map:

        【Vertex Type】: Select the vertex type, and upload the column data in the file for its ID mapping;

        【Edge Type】: Select the edge type and map the column data of the uploaded file to the ID column of its start point type and end point type;

      2. Mapping settings: upload the column data in the file for the attribute mapping of the selected vertex type. Here, if the attribute name is the same as the header name of the file, the mapping attribute can be automatically matched, and there is no need to manually fill in the selection.

      3. After completing the setting, the setting list will be displayed before proceeding to the next step. It supports the operations of adding, editing and deleting mappings.

    Fill in the settings map:

    image

    Mapping list:

    image
    3.3.5 Import data

Before importing, you need to fill in the import settings. Once filled in, you can start importing data into the graph.

    1. Import settings
    image
    1. Import details
    image

    3.4 Data Analysis

    3.4.1 Module entry

    Left navigation:

    image
    3.4.2 Multi-image switching

    By switching the entrance on the left, flexibly switch the operation space of multiple graphs

    image
    3.4.3 Graph Analysis and Processing

HugeGraph supports Gremlin, the graph traversal language of Apache TinkerPop3 and a general-purpose graph database query language. By entering Gremlin statements and clicking Execute, you can query and analyze graph data, create and delete vertices/edges, modify vertex/edge attributes, and so on.

After a Gremlin query, the graph result display area below provides three display modes: [Graph Mode], [Table Mode], and [Json Mode].

Zoom, center, full-screen, export, and other operations are supported.
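
As a minimal illustration of the kind of statement you might type into the analysis box (the statement is a generic example; any labels or property names must match your own schema), the same query can also be sent directly to the server's Gremlin API:

# In Hubble you would type only the value of "gremlin"; this REST call is the equivalent
curl -X POST -H "Content-Type: application/json" \
     -d '{"gremlin": "g.V().limit(10)"}' \
     "http://127.0.0.1:8080/apis/gremlin"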

【Graph Mode】

    image

    【Table mode】

    image

    【Json mode】

    image
    3.4.4 Data Details

Click a vertex/edge entity to view its data details, including the vertex/edge type, vertex ID, and attributes with their corresponding values, expanding the information dimensions of the graph and improving usability.

    3.4.5 Multidimensional Path Query of Graph Results

In addition to the global query, customized in-depth queries and hide operations can be performed on the vertices in a query result, enabling customized mining of graph results.

Right-click a vertex to open its menu, from which the vertex can be expanded, queried, hidden, and so on.

Double-clicking a vertex also displays the vertices associated with the selected vertex.

    image
    3.4.6 Add vertex/edge
    3.4.6.1 Added vertex

    In the graph area, two entries can be used to dynamically add vertices, as follows:

    1. Click on the graph area panel, the Add Vertex entry appears
    2. Click the first icon in the action bar in the upper right corner

    Complete the addition of vertices by selecting or filling in the vertex type, ID value, and attribute information.

    The entry is as follows:

    image

    Add the vertex content as follows:

    image
    3.4.6.2 Add edge

    Right-click a vertex in the graph result to add the outgoing or incoming edge of that point.

    3.4.7 Execute the query of records and favorites
1. Each query is recorded at the bottom of the graph area, including the query time, execution type, content, status, and time taken, together with [favorite] and [load] operations, providing a complete and traceable record of graph execution and allowing previous queries to be quickly loaded and reused
2. A favorites feature is provided for collecting frequently used statements, making it convenient to quickly recall high-frequency statements.
    image

    3.5 Task Management

    3.5.1 Module entry

    Left navigation:

    image
    3.5.2 Task Management
    1. Provide unified management and result viewing of asynchronous tasks. There are 4 types of asynchronous tasks, namely:
    1. The list displays the asynchronous task information of the current graph, including: task ID, task name, task type, creation time, time-consuming, status, operation, and realizes the management of asynchronous tasks.
    2. Support filtering by task type and status
    3. Support searching for task ID and task name
    4. Asynchronous tasks can be deleted or deleted in batches
    image
    3.5.3 Gremlin asynchronous tasks
    1. Create a task
    1. Task submission
1. Task details
    image

    Click to view the entry to jump to the task management list, as follows:

    image
    1. View the results
    3.5.4 OLAP algorithm tasks

Hubble does not provide visual execution of OLAP algorithms. You can call the RESTful API to run OLAP algorithm tasks, then find the corresponding task by ID in Task Management and view its progress and results.
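
For example, once the algorithm API has returned a task ID, the task's progress and result can be polled through the task API (host, graph name, and task id are placeholders):

# Check the progress/result of an asynchronous task by id
curl "http://127.0.0.1:8080/apis/graphs/hugegraph/tasks/2"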

    3.5.5 Delete metadata, rebuild index
    1. Create a task
    image
    image
    1. Task details
    image

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/quickstart/hugegraph-loader/index.html b/docs/quickstart/hugegraph-loader/index.html index dcdbc4af8..85253f305 100644 --- a/docs/quickstart/hugegraph-loader/index.html +++ b/docs/quickstart/hugegraph-loader/index.html @@ -481,7 +481,7 @@ --deploy-mode cluster --name spark-hugegraph-loader --file ./hugegraph.json \ --username admin --token admin --host xx.xx.xx.xx --port 8093 \ --graph graph-test --num-executors 6 --executor-cores 16 --executor-memory 15g -
    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/quickstart/hugegraph-server/index.html b/docs/quickstart/hugegraph-server/index.html index 578c52057..83cfe7d0f 100644 --- a/docs/quickstart/hugegraph-server/index.html +++ b/docs/quickstart/hugegraph-server/index.html @@ -182,7 +182,7 @@ }

For detailed API, please refer to RESTful-API

    7 Stop Server

    $cd hugegraph-${version}
     $bin/stop-hugegraph.sh
    -

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/quickstart/hugegraph-tools/index.html b/docs/quickstart/hugegraph-tools/index.html index 312f66e85..4517a6a50 100644 --- a/docs/quickstart/hugegraph-tools/index.html +++ b/docs/quickstart/hugegraph-tools/index.html @@ -383,7 +383,7 @@ # 恢复图模式 ./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph graph-mode-set -m NONE
8. Graph migration
    ./bin/hugegraph --url http://127.0.0.1:8080 --graph hugegraph migrate --target-url http://127.0.0.1:8090 --target-graph hugegraph
    -

    Last modified September 15, 2022: add rank api & fix typo (06499b0)
    diff --git a/docs/quickstart/index.html b/docs/quickstart/index.html index 908082b00..6d2bba88b 100644 --- a/docs/quickstart/index.html +++ b/docs/quickstart/index.html @@ -4,7 +4,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    Quick Start


    Last modified April 17, 2022: rebuild doc (ef36544)
    diff --git a/docs/summary/index.html b/docs/summary/index.html index 59a16558b..1d7baf214 100644 --- a/docs/summary/index.html +++ b/docs/summary/index.html @@ -13,7 +13,7 @@ Create child page Create documentation issue Create project issue - Print entire section

    HugeGraph Docs

    Quickstart

    Config

    API

    Guides

    Query Language

    Performance

    ChangeLogs


    Last modified November 27, 2022: Add HugeGraph-Computer Doc (#155) (19ab2ff)
    diff --git a/en/sitemap.xml b/en/sitemap.xml index 124260219..18e6e45bd 100644 --- a/en/sitemap.xml +++ b/en/sitemap.xml @@ -1 +1 @@ -/docs/guides/architectural/2022-11-27T21:05:55+08:00/docs/config/config-guide/2022-04-17T11:36:55+08:00/docs/language/hugegraph-gremlin/2022-09-15T12:59:59+08:00/docs/contribution-guidelines/contribute/2022-09-15T12:59:59+08:00/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T12:59:59+08:00/docs/quickstart/hugegraph-server/2022-09-15T12:59:59+08:00/docs/introduction/readme/2022-11-27T21:44:37+08:00/docs/changelog/hugegraph-0.12.0-release-notes/2022-04-17T11:36:55+08:00/docs/clients/restful-api/2022-04-17T11:36:55+08:00/docs/clients/restful-api/schema/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2022-04-17T11:36:55+08:00/docs/config/config-option/2022-09-15T12:59:59+08:00/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/docs/download/download/2022-09-15T12:59:59+08:00/docs/language/hugegraph-example/2022-09-15T12:59:59+08:00/docs/clients/hugegraph-client/2022-09-15T12:59:59+08:00/docs/performance/api-preformance/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-loader/2022-09-15T12:59:59+08:00/docs/clients/restful-api/propertykey/2022-05-12T21:24:05+08:00/docs/contribution-guidelines/subscribe/2022-09-15T12:59:59+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2022-04-17T11:36:55+08:00/docs/config/config-authentication/2022-04-17T11:36:55+08:00/docs/clients/gremlin-console/2022-05-25T21:16:41+08:00/docs/guides/custom-plugin/2022-09-15T12:59:59+08:00/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-tools/2022-09-15T12:59:59+08:00/docs/quickstart/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.4.4/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/docs/config/2022-04-17T11:36:55+08:00/docs/config/config-https/2022-04-17T11:36:55+08:00/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.2/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-hubble/2022-09-15T12:59:59+08:00/docs/clients/2022-04-17T11:36:55+08:00/docs/config/config-computer/2022-11-28T10:57:39+08:00/docs/guides/faq/2022-09-15T15:16:23+08:00/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-client/2022-09-15T12:59:59+08:00/docs/guides/2022-04-17T11:36:55+08:00/docs/clients/restful-api/rebuild/2022-05-09T18:43:53+08:00/docs/quickstart/hugegraph-computer/2022-11-28T10:57:39+08:00/docs/language/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertex/2022-09-15T15:16:23+08:00/docs/clients/restful-api/edge/2022-09-15T15:16:23+08:00/docs/performance/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/2022-04-28T21:26:41+08:00/docs/clients/restful-api/traverser/2022-04-17T11:36:55+08:00/docs/changelog/2022-04-28T21:26:41+08:00/docs/clients/restful-api/rank/2022-09-15T12:59:59+08:00/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/docs/clients/restful-api/graphs/2022-05-27T09:27:37+08:00/docs/clients/restful-api/task/2022-09-15T12:59:59+08:00/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/docs/clients/restful-api/auth/2022-04-17T11:36:55+08:00/docs/clients/restful-api/other/2022-04-17T11:36:55+08:00/docs/2022-04-21T15:42:39+08:00/blog/news/2022-03-21T18:55:33+08:00/blog/releases/2022-03-21T18:55:33+08:00/blog/2018/10/06/easy-documentation-with-do
csy/2022-03-21T18:55:33+08:00/blog/2018/10/06/the-second-blog-post/2022-03-21T18:55:33+08:00/blog/2018/01/04/another-great-release/2022-03-21T18:55:33+08:00/docs/cla/2022-03-21T19:51:14+08:00/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T12:59:59+08:00/docs/summary/2022-11-27T21:05:55+08:00/about/2022-04-21T15:42:39+08:00/blog/2022-03-21T18:55:33+08:00/categories//community/2022-03-21T18:55:33+08:00/2022-11-27T21:44:37+08:00/search/2022-03-21T18:55:33+08:00/tags/ \ No newline at end of file +/docs/guides/architectural/2022-11-27T21:05:55+08:00/docs/config/config-guide/2022-04-17T11:36:55+08:00/docs/language/hugegraph-gremlin/2022-09-15T12:59:59+08:00/docs/contribution-guidelines/contribute/2022-09-15T12:59:59+08:00/docs/performance/hugegraph-benchmark-0.5.6/2022-09-15T12:59:59+08:00/docs/quickstart/hugegraph-server/2022-09-15T12:59:59+08:00/docs/introduction/readme/2022-11-27T21:44:37+08:00/docs/changelog/hugegraph-0.12.0-release-notes/2022-04-17T11:36:55+08:00/docs/clients/restful-api/2022-04-17T11:36:55+08:00/docs/clients/restful-api/schema/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-rocksdb/2022-04-17T11:36:55+08:00/docs/config/config-option/2022-09-15T12:59:59+08:00/docs/guides/desgin-concept/2022-04-17T11:36:55+08:00/docs/download/download/2022-09-15T12:59:59+08:00/docs/language/hugegraph-example/2022-09-15T12:59:59+08:00/docs/clients/hugegraph-client/2022-09-15T12:59:59+08:00/docs/performance/api-preformance/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-loader/2022-09-15T12:59:59+08:00/docs/clients/restful-api/propertykey/2022-05-12T21:24:05+08:00/docs/contribution-guidelines/subscribe/2022-09-15T12:59:59+08:00/docs/performance/api-preformance/hugegraph-api-0.5.6-cassandra/2022-04-17T11:36:55+08:00/docs/config/config-authentication/2022-04-17T11:36:55+08:00/docs/clients/gremlin-console/2022-05-25T21:16:41+08:00/docs/guides/custom-plugin/2022-09-15T12:59:59+08:00/docs/performance/hugegraph-loader-performance/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-tools/2022-09-15T12:59:59+08:00/docs/quickstart/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.4.4/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertexlabel/2022-04-17T11:36:55+08:00/docs/guides/backup-restore/2022-04-17T11:36:55+08:00/docs/config/2022-04-17T11:36:55+08:00/docs/config/config-https/2022-04-17T11:36:55+08:00/docs/clients/restful-api/edgelabel/2022-04-17T11:36:55+08:00/docs/performance/api-preformance/hugegraph-api-0.2/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-hubble/2022-09-15T12:59:59+08:00/docs/clients/2022-04-17T11:36:55+08:00/docs/config/config-computer/2022-11-28T10:57:39+08:00/docs/guides/faq/2022-09-15T15:16:23+08:00/docs/clients/restful-api/indexlabel/2022-04-17T11:36:55+08:00/docs/quickstart/hugegraph-client/2022-09-15T12:59:59+08:00/docs/guides/2022-04-17T11:36:55+08:00/docs/clients/restful-api/rebuild/2022-05-09T18:43:53+08:00/docs/quickstart/hugegraph-computer/2022-11-28T10:57:39+08:00/docs/language/2022-04-17T11:36:55+08:00/docs/clients/restful-api/vertex/2022-09-15T15:16:23+08:00/docs/clients/restful-api/edge/2022-09-15T15:16:23+08:00/docs/performance/2022-04-17T11:36:55+08:00/docs/contribution-guidelines/2022-04-28T21:26:41+08:00/docs/clients/restful-api/traverser/2022-04-17T11:36:55+08:00/docs/changelog/2022-04-28T21:26:41+08:00/docs/clients/restful-api/rank/2022-09-15T12:59:59+08:00/docs/clients/restful-api/variable/2022-04-17T11:36:55+08:00/docs/clients/restful-api/graphs/2022-05-27T09:27:37+08:00/docs/clien
ts/restful-api/task/2022-09-15T12:59:59+08:00/docs/clients/restful-api/gremlin/2022-04-17T11:36:55+08:00/docs/clients/restful-api/auth/2022-04-17T11:36:55+08:00/docs/clients/restful-api/other/2022-04-17T11:36:55+08:00/docs/2022-04-21T15:42:39+08:00/blog/news/2022-03-21T18:55:33+08:00/blog/releases/2022-03-21T18:55:33+08:00/blog/2018/10/06/easy-documentation-with-docsy/2022-03-21T18:55:33+08:00/blog/2018/10/06/the-second-blog-post/2022-03-21T18:55:33+08:00/blog/2018/01/04/another-great-release/2022-03-21T18:55:33+08:00/docs/cla/2022-03-21T19:51:14+08:00/docs/performance/hugegraph-benchmark-0.4.4/2022-09-15T12:59:59+08:00/docs/summary/2022-11-27T21:05:55+08:00/about/2022-04-21T15:42:39+08:00/blog/2022-03-21T18:55:33+08:00/categories//community/2022-03-21T18:55:33+08:00/2022-12-12T18:18:56+08:00/search/2022-03-21T18:55:33+08:00/tags/ \ No newline at end of file diff --git a/index.html b/index.html index efad5420d..f14eccf9e 100644 --- a/index.html +++ b/index.html @@ -17,7 +17,7 @@

    Apache HugeGraph

                        Incubating

    Learn More -Download

HugeGraph is a convenient, efficient, and adaptable graph database

compatible with the Apache TinkerPop3 framework and the Gremlin query language.

HugeGraph supports fast import of graphs with more than 10 billion vertices and edges, millisecond-level OLTP query capability, and large-scale distributed graph processing (OLAP). The main scenarios of HugeGraph include correlation search, fraud detection, and knowledge graph.

    Convenient

It not only supports the Gremlin graph query language and RESTful API but also provides commonly used graph algorithm APIs. To help users easily implement various queries and analyses, HugeGraph has a full range of accessory tools, such as distributed storage, data replication, horizontal scaling, and many built-in storage backends.

    Efficient

    Has been deeply optimized in graph storage and graph computation. It provides multiple batch import tools that can easily complete the fast-import of tens of billions of data, achieves millisecond-level response for graph retrieval through ameliorated queries, and supports concurrent online and real-time operations for thousands of users.

    Adaptable

Adapts to the Apache Gremlin standard graph query language and the Property Graph standard modeling method, and supports both graph-based OLTP and OLAP schemes. Furthermore, HugeGraph can be integrated with Hadoop and Spark big data platforms, and the backend storage engine can easily be extended through plug-ins.

    The first graph database project in Apache

    Get The Toolchain

It includes graph loader & dashboard & backup tools

    Efficient

    We do a Pull Request contributions workflow on GitHub. New users are always welcome!

    Read more …

    Follow us on Wechat!

    Follow the official account “HugeGraph” to get the latest news

PS: the Twitter account is on the way

    Read more …

    Welcome to the HugeGraph open source community!

    +Download


Apache HugeGraph is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

    diff --git a/search/index.html b/search/index.html index b00e55222..da2846fec 100644 --- a/search/index.html +++ b/search/index.html @@ -1,5 +1,5 @@ Search Results | HugeGraph -

    Search Results


    diff --git a/tags/index.html b/tags/index.html index 9d048dd95..1b2a698f4 100644 --- a/tags/index.html +++ b/tags/index.html @@ -1,5 +1,5 @@ Tags | HugeGraph -

    Tags
