diff --git a/.github/0a3f26d8.png b/.github/0a3f26d8.png
new file mode 100644
index 00000000..a9ef15f1
Binary files /dev/null and b/.github/0a3f26d8.png differ
diff --git a/.github/mermaid-diagram-20191017172946.svg b/.github/mermaid-diagram-20191017172946.svg
new file mode 100644
index 00000000..3eb299ae
--- /dev/null
+++ b/.github/mermaid-diagram-20191017172946.svg
@@ -0,0 +1,1328 @@
+
\ No newline at end of file
diff --git a/.github/mermaid-diagram-20191017173106.svg b/.github/mermaid-diagram-20191017173106.svg
new file mode 100644
index 00000000..159fdbf4
--- /dev/null
+++ b/.github/mermaid-diagram-20191017173106.svg
@@ -0,0 +1,665 @@
+
\ No newline at end of file
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 3bc92d9e..ec295848 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,4 +1,12 @@
+# π₯ Breaking Changes (`v0.0.45 -> v0.0.46`)
+
+The new GNES Flow API, introduced in `v0.0.46`, has become the main API of GNES. It provides a pythonic and intuitive way of building pipelines in GNES and enables running/debugging on a local machine. It also supports graph visualization, Swarm/Kubernetes config export, and more. More information about [GNES Flow can be found here](http://doc.gnes.ai/en/latest/api/gnes.flow.html).
+
+As a consequence, the [`composer` module](/gnes/composer), the `gnes compose` CLI and the GNES board web UI will be removed in the next releases.
+
+GNES board will be redesigned using the GNES Flow API. We highly [welcome your contribution on this thread](CONTRIBUTING.md)!
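+
+For a quick taste of the new API, here is a minimal sketch (the YAML paths below are placeholders for your own component configs):
+
+```python
+from gnes.flow import Flow
+
+# define a toy indexing workflow: preprocess -> encode -> index
+flow = (Flow(check_version=False)
+        .add_preprocessor(yaml_path='prep.yml')
+        .add_encoder(yaml_path='encode.yml')
+        .add_indexer(yaml_path='index.yml'))
+
+# visualize the workflow without starting any service
+flow.build(backend=None).to_url()
+```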
+
# Release Note (`v0.0.45`)
> Release time: 2019-10-15 14:01:07
diff --git a/README.md b/README.md
index b2692692..83463e7d 100644
--- a/README.md
+++ b/README.md
@@ -145,16 +145,12 @@ Besides the `alpine` image optimized for the space, we also provide Buster (Debi
-We also provide a public mirror hosted on Tencent Cloud, from which Chinese mainland users can pull the image faster.
+We also provide a public mirror hosted on Tencent Cloud and [Github packages](https://github.com/gnes-ai/gnes/packages/). Choose whichever mirror works best for you.
```bash
docker login --username=xxx ccr.ccs.tencentyun.com # login to Tencent Cloud so that we can pull from it
docker run ccr.ccs.tencentyun.com/gnes/gnes:latest-alpine
-```
-
-π Since 2019.9.24, you can also pull GNES from [Github packages](https://github.com/gnes-ai/gnes/packages/). Note, older versions/tags before 2019.9.24 are not uploaded.
-
-```bash
+# OR via Github package
docker login --username=xxx docker.pkg.github.com/gnes-ai/gnes # login to github package so that we can pull from it
docker run docker.pkg.github.com/gnes-ai/gnes/gnes:latest-alpine
```
@@ -231,21 +227,23 @@ Either way, if you end up reading the following message after `$ gnes` or `$ doc
- [π£ Preliminaries](#-preliminaries)
* [Microservice](#microservice)
* [Runtime](#runtime)
-- [Demo for the impatient](#demo-for-the-impatient)
- * [Semantic poem search in 3-minutes or less](#building-a-semantic-poem-search-engine-in-3-minutes-or-less)
-- [Build your first GNES app on local machine](#build-your-first-gnes-app-on-local-machine)
-- [Scale your GNES app to the cloud](#scale-your-gnes-app-to-the-cloud)
-- [Customize GNES on your need](#customize-gnes-to-your-need)
-- [Take-home messages](#take-home-messages)
- * [π¨βπ»οΈWhat's next?](#-whats-next)
+- [Building a flower search engine in 3 minutes](#building-a-flower-search-engine-in-3-minutes)
+ * [Define the indexing workflow](#define-the-indexing-workflow)
+ * [Indexing flower image data](#indexing-flower-image-data)
+ * [Querying similar flowers](#querying-similar-flowers)
+- [Elastic made easy](#elastic-made-easy)
+- [Deploying a flow via Docker Swarm/Kubernetes](#deploying-a-flow-via-docker-swarmkubernetes)
+- [Building a cloud-native semantic poem search engine](#building-a-cloud-native-semantic-poem-search-engine)
+- [π¨βπ»οΈTake-home messages](#-take-home-messages)
+
### π£ Preliminaries
-Before we start, let me first introduce two important concepts serving as the backbone of GNES: **microservice** and **runtime**.
+Before we start, let me first introduce two important concepts serving as the backbone of GNES: **microservice** and **workflow**.
#### Microservice
-For machine learning engineers and data scientists who are not familiar with the concept of *cloud-native* and *microservice*, one can picture a microservice as an app (on your smartphone). Each app runs independently, and an app may cooperate with other apps to accomplish a task. In GNES, we have four fundamental apps, aka. microservices, they are:
+For machine learning engineers and data scientists who are not familiar with the concepts of *cloud-native* and *microservice*, picture a microservice as an app on your smartphone. Each app runs independently, and an app may cooperate with other apps to accomplish a task. In GNES, we have four fundamental apps, aka microservices:
- [**Preprocessor**](http://doc.gnes.ai/en/latest/chapter/microservice.html#preprocess): transforming a real-world object to a list of workable semantic units;
- [**Encoder**](http://doc.gnes.ai/en/latest/chapter/microservice.html#encode): representing a semantic unit with vector representation;
@@ -254,373 +252,187 @@ For machine learning engineers and data scientists who are not familiar with the
In GNES, we have implemented dozens of preprocessor, encoder, indexer to process different content forms, such as image, text, video. It is also super easy to plug in your own implementation, which we shall see an example in the sequel.
-#### Runtime
+#### Workflow
-Okay, now that we have a bunch of apps, what are we expecting them to do? In a typical search system, there are two fundamental tasks: **indexing** and **querying**. Indexing is storing the documents, querying is searching the documents, pretty straightforward. In a neural search system, one may also face another task: **training**, where one fine-tunes an encoder/preprocessor according to the data distribution in order to achieve better search relevance. These three tasks: indexing, querying and training are what we call three **runtimes** in GNES.
+Now that we have a bunch of apps, what are we expecting them to do? A typical search system has two fundamental tasks: **index** and **query**. Indexing stores the documents; querying searches the documents. In a neural search system, one may face another task: **train**, where one fine-tunes an encoder/preprocessor according to the data distribution in order to achieve better search relevance.
-π‘ The key to understand GNES is to know *which runtime requires what microservices, and each microservice does what*.
+These three tasks correspond to three different **workflows** in GNES.
-### Demo for the impatient
+### Building a flower search engine in 3 minutes
-#### Building a semantic poem search engine in 3-minutes or less
+> π£ Since `v0.0.46`, [GNES Flow](http://doc.gnes.ai/en/latest/api/gnes.flow.html) has become the main interface of GNES. GNES Flow provides a **pythonic** and **intuitive** way to implement a **workflow**, enabling users to run or debug GNES on a local machine. By default, GNES Flow orchestrates all microservices using a multi-thread or multi-process backend; it can also be exported to a Docker Swarm/Kubernetes YAML config, allowing one to deliver GNES to the cloud.
-For the impatient, we present a complete demo using GNES that enables semantic index and query on poems.
-Please checkout [this repository for details](https://github.com/gnes-ai/demo-poems-ir) and follow the instructions to reproduce.
+π° The complete example and the corresponding Jupyter Notebook [can be found here](https://github.com/gnes-ai/demo-gnes-flow).
-
-### Build your first GNES app on local machine
+In this example, we will use the new `gnes.flow` API (`gnes >= 0.0.46` is required) to build a toy image search system that indexes and retrieves [flowers](http://www.robots.ox.ac.uk/~vgg/data/flowers/17/) based on their similarity.
-Let's start with a typical indexing procedure by writing a YAML config (see the left column of the table):
+#### Define the indexing workflow
-
+Let's first define the indexing workflow:
-Now let's see what the YAML config says. First impression, it is pretty intuitive. It defines a pipeline workflow consists of preprocessing, encoding and indexing, where the output of the former component is the input of the next. This pipeline is a typical workflow of *index* or *query* runtime. Under each component, we also associate it with a YAML config specifying how it should work. Right now they are not important for understanding the big picture, nonetheless curious readers can checkout how each YAML looks like by expanding the items below.
-
-
- Preprocessor config: text-prep.yml (click to expand...)
-
-```yaml
-!SentSplitPreprocessor
-parameters:
- start_doc_id: 0
- random_doc_id: True
- deliminator: "[.!?]+"
-gnes_config:
- is_trained: true
+```python
+from gnes.flow import Flow
+flow = (Flow(check_version=False)
+ .add_preprocessor(name='prep', yaml_path='yaml/prep.yml')
+ .add_encoder(yaml_path='yaml/incep.yml')
+ .add_indexer(name='vec_idx', yaml_path='yaml/vec.yml')
+ .add_indexer(name='doc_idx', yaml_path='yaml/doc.yml', recv_from='prep')
+ .add_router(name='sync', yaml_path='BaseReduceRouter', num_part=2, recv_from=['vec_idx', 'doc_idx']))
```
-
-
- Encoder config: gpt2.yml (click to expand...)
-
-```yaml
-!PipelineEncoder
-components:
- - !GPT2Encoder
- parameters:
- model_dir: $GPT2_CI_MODEL
- pooling_stragy: REDUCE_MEAN
- gnes_config:
- is_trained: true
- - !PCALocalEncoder
- parameters:
- output_dim: 32
- num_locals: 8
- gnes_config:
- batch_size: 2048
- - !PQEncoder
- parameters:
- cluster_per_byte: 8
- num_bytes: 8
-gnes_config:
- work_dir: ./
- name: gpt2bin-pipe
-```
+Here, we use [the inceptionV4 pretrained model](https://github.com/tensorflow/models/tree/master/research/slim) as the encoder and the built-in indexers for storing vectors and documents. The flow should be quite self-explanatory; if not, you can always convert it to an SVG image and see its visualization:
-
-
-
- Indexer config: b-indexer.yml (click to expand...)
-
-```yaml
-!BIndexer
-parameters:
- num_bytes: 8
- data_path: /out_data/idx.binary
-gnes_config:
- work_dir: ./
- name: bindexer
-```
-
-
-On the right side of the above table, you can see how the actual data flow looks like. There is an additional component `gRPCFrontend` automatically added to the workflow, it allows you to feed the data and fetch the result via gRPC protocol through port `5566`.
-
-Now it's time to run! [GNES board](https://board.gnes.ai) can automatically generate a starting script/config based on the YAML config you give, saving troubles of writing them on your own.
+```python
+flow.build(backend=None).to_url()
+```
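+
+Before querying, we feed the flower images through the flow above to index them. Below is a minimal sketch; it assumes the `Flow` exposes an `index()` method analogous to the `query()` call used next, and it reuses the `read_flowers()` byte generator from the linked notebook (fed here with the full dataset):
+
+```python
+# indexing sketch: push the flower images through prep -> encoder -> indexers
+# (assumes fl.index() mirrors the fl.query() call shown below)
+with flow.build(backend='process') as fl:
+    fl.index(bytes_gen=read_flowers(1.))
+```
+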
+We simply sample 20 flower images as queries and search for their top-10 similar images:
-This suggests the GNES app is ready and waiting for the incoming data. You may now feed data to it through the `gRPCFrontend`. Depending on your language (Python, C, Java, Go, HTTP, Shell, etc.) and the content form (image, video, text, etc), the data feeding part can be slightly different.
+```python
+num_q = 20
+topk = 10
+sample_rate = 0.05
-To stop a running GNES, you can simply do control + c.
+# do the query
+results = []
+with flow.build(backend='process') as fl:
+ for q, r in fl.query(bytes_gen=read_flowers(sample_rate)):
+ q_img = q.search.query.raw_bytes
+ r_imgs = [k.doc.raw_bytes for k in r.search.topk_results]
+ r_scores = [k.score.value for k in r.search.topk_results]
+ results.append((q_img, r_imgs, r_scores))
+ if len(results) >= num_q:  # stop after num_q queries
+ break
+```
+Here is the result, with the queries in the first row.
-### Scale your GNES app to the cloud
+![](.github/0a3f26d8.png)
-Now let's juice it up a bit. To be honest, building a single-machine process-based pipeline is not impressive anyway. The true power of GNES is that you can scale any component at any time you want. Encoding is slow? Adding more machines. Preprocessing takes too long? More machines. Index file is too large? Adding shards, aka. more machines!
+### Elastic made easy
-In this example, we compose a more complicated GNES workflow for images. This workflow consists of multiple preprocessors, encoders and two types of indexers. In particular, we introduce two types of indexers: one for storing the encoded binary vectors, the other for storing the original images, i.e. full-text index. These two types of indexers work in parallel. Check out the YAML file on the left side of table for more details, note how `replicas` is defined for each component.
+To increase the number of parallel components in the flow, simply add `replicas` to each service:
-
-
-You may realize that besides the `gRPCFrontend`, multiple `Router` have been added to the workflow. Routers serve as a message broker between microservices, determining how and where the message is received and sent. In the last pipeline example, the data flow is too simple so there is no need for adding any router. In this example routers are necessary for connecting multiple preprocessors and encoders, otherwise preprocessors wouldn't know where to send the message. GNES Board automatically adds router to the workflow when necessary based on the type of two consecutive layers. It may also add stacked routers, as you can see between encoder and indexer in the right graph.
-
-Again, the detailed YAML config of each component is not important for understanding the big picture, hence we omit it for now.
-
-This time we will run GNES via DockerSwarm. To do that simply copy the generated DockerSwarm YAML config to a file say `my-gnes.yml`, and then do
-```bash
-docker stack deploy --compose-file my-gnes.yml gnes-531
-```
-
-Note that `gnes-531` is your GNES stack name, keep that name in mind. If you forget about that name, you can always use `docker stack ls` to find out. To tell whether the whole stack is running successfully or not, you can use `docker service ls -f name=gnes-531`. The number of replicas `1/1` or `4/4` suggests everything is fine.
+```python
+flow = (Flow(check_version=False, ctrl_with_ipc=True)
+ .add_preprocessor(name='prep', yaml_path='yaml/prep.yml', replicas=5)
+ .add_encoder(yaml_path='yaml/incep.yml', replicas=6)
+ .add_indexer(name='vec_idx', yaml_path='yaml/vec.yml')
+ .add_indexer(name='doc_idx', yaml_path='yaml/doc.yml', recv_from='prep')
+ .add_router(name='sync', yaml_path='BaseReduceRouter', num_part=2, recv_from=['vec_idx', 'doc_idx']))
+```
-Generally, a complete and successful Docker Swarm starting process should look like the following:
+```python
+flow.build(backend=None).to_url()
+```
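+
+As promised, a flow is not limited to the local multi-process backend: it can also be exported to a Docker Swarm config for cloud deployment. Below is a minimal sketch; the export method name `to_swarm_yaml()` is an assumption here, please refer to the [GNES Flow documentation](http://doc.gnes.ai/en/latest/api/gnes.flow.html) for the exact API:
+
+```python
+# sketch: export the replicated flow to a Docker Swarm compose file
+# (to_swarm_yaml() is an assumed export method, see the Flow docs)
+with open('my-gnes.yml', 'w') as fp:
+    fp.write(flow.build(backend=None).to_swarm_yaml())
+```
+
+The generated YAML can then be deployed with `docker stack deploy --compose-file my-gnes.yml gnes-531`, as described in the [Docker Swarm tutorial](docs/chapter/swarm-tutorial.md).
+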
+### Building a cloud-native semantic poem search engine
-
+In this example, we will build a semantic poem search engine using GNES. Unlike the previous flower search example, here we run each service as an isolated Docker container and orchestrate them via Docker Swarm. This represents a common scenario in cloud settings. You will learn how to use powerful and customized GNES images from [GNES hub](https://github.com/gnes-ai/hub).
+π° Please check out [this repository for details](https://github.com/gnes-ai/demo-poems-ir) and follow the instructions to reproduce.
-### Take-home messages
+
-Now that you know how to compose and run a GNES app, let's make a short recap of what we have learned.
-- GNES is *all-in-microservice*, there are four fundamental components: preprocessor, encoder, indexer and router.
-- GNES has three runtimes: training, indexing, and querying. The key to compose a GNES app is to clarify *which runtime requires what microservices (defined in the YAML config), and each microservice does what (defined in the component-wise YAML config)*.
-- GNES requires an orchestration engine to coordinate all microservices. It supports Kubernetes, Docker Swarm and a shell-based multi-process solution.
-- [GNES Board](https://board.gnes.ai) is a convenient tool for visualizing the workflow, generating starting script or cloud configuration.
-- The real power of GNES is elasticity on every level. Router is automatically added between microservices for connecting the pieces together.
+### π¨βπ»οΈ Take-home messages
+Let's make a short recap of what we have learned.
-#### π¨βπ»οΈ What's next?
+- GNES is *all-in-microservice*; there are four fundamental components: preprocessor, encoder, indexer and router.
+- GNES has three typical workflows: train, index, and query.
+- One can leverage [GNES Flow API](http://doc.gnes.ai/en/latest/api/gnes.flow.html) to define, modify, export or even visualize a workflow.
+- GNES requires an orchestration engine to coordinate all microservices. It supports Kubernetes, Docker Swarm, or the built-in multi-process/thread solution.
-The next step is feeding data to GNES for training, indexing and querying. Checkout the [tutorials](#tutorial) and [documentations](#documentation) for more details.
Documentation
@@ -645,7 +457,6 @@ The official documentation of GNES is hosted on [doc.gnes.ai](https://doc.gnes.a
- Using GNES with Kubernetes
- Using GNES in other language (besides Python)
- Serves HTTP-request with GNES in an end-to-end way
-
- Migrating from [`bert-as-service`](https://github.com/hanxiao/bert-as-service)
Benchmark
diff --git a/docs/chapter/swarm-tutorial.md b/docs/chapter/swarm-tutorial.md
new file mode 100644
index 00000000..aa9e28df
--- /dev/null
+++ b/docs/chapter/swarm-tutorial.md
@@ -0,0 +1,333 @@
+# Using GNES with Docker Swarm
+
+### Build your first GNES app on local machine
+
+Let's start with a typical indexing procedure by writing a YAML config (see the left column of the table):
+
+
+
+Now let's see what the YAML config says. At first impression, it is pretty intuitive. It defines a pipeline workflow consisting of preprocessing, encoding and indexing, where the output of the former component is the input of the next. This pipeline is a typical workflow of the *index* or *query* runtime. Under each component, we also associate it with a YAML config specifying how it should work. Right now they are not important for understanding the big picture; nonetheless, curious readers can check out what each YAML looks like by expanding the items below.
+
+
+ Preprocessor config: text-prep.yml (click to expand...)
+
+```yaml
+!SentSplitPreprocessor
+parameters:
+ start_doc_id: 0
+ random_doc_id: True
+ deliminator: "[.!?]+"
+gnes_config:
+ is_trained: true
+```
+
+
+
+ Encoder config: gpt2.yml (click to expand...)
+
+```yaml
+!PipelineEncoder
+components:
+ - !GPT2Encoder
+ parameters:
+ model_dir: $GPT2_CI_MODEL
+ pooling_stragy: REDUCE_MEAN
+ gnes_config:
+ is_trained: true
+ - !PCALocalEncoder
+ parameters:
+ output_dim: 32
+ num_locals: 8
+ gnes_config:
+ batch_size: 2048
+ - !PQEncoder
+ parameters:
+ cluster_per_byte: 8
+ num_bytes: 8
+gnes_config:
+ work_dir: ./
+ name: gpt2bin-pipe
+```
+
+
+
+
+ Indexer config: b-indexer.yml (click to expand...)
+
+```yaml
+!BIndexer
+parameters:
+ num_bytes: 8
+ data_path: /out_data/idx.binary
+gnes_config:
+ work_dir: ./
+ name: bindexer
+```
+
+
+On the right side of the above table, you can see what the actual data flow looks like. There is an additional component `gRPCFrontend` automatically added to the workflow; it allows you to feed the data and fetch the result via the gRPC protocol through port `5566`.
+
+Now it's time to run! [GNES board](https://board.gnes.ai) can automatically generate a starting script/config based on the YAML config you give, saving you the trouble of writing them on your own.
+
+
+
+This suggests the GNES app is ready and waiting for incoming data. You may now feed data to it through the `gRPCFrontend`. Depending on your language (Python, C, Java, Go, HTTP, Shell, etc.) and the content form (image, video, text, etc.), the data feeding part can be slightly different.
+
+To stop a running GNES, you can simply press Control + C.
+
+
+### Scale your GNES app to the cloud
+
+Now let's juice it up a bit. To be honest, building a single-machine, process-based pipeline is not that impressive anyway. The true power of GNES is that you can scale any component at any time you want. Encoding is slow? Add more machines. Preprocessing takes too long? More machines. Index file is too large? Add shards, aka more machines!
+
+In this example, we compose a more complicated GNES workflow for images. This workflow consists of multiple preprocessors, encoders and two types of indexers. In particular, we introduce two types of indexers: one for storing the encoded binary vectors, the other for storing the original images, i.e. a full-text index. These two types of indexers work in parallel. Check out the YAML file on the left side of the table for more details, and note how `replicas` is defined for each component.
+
+
+
+You may realize that besides the `gRPCFrontend`, multiple `Router`s have been added to the workflow. Routers serve as message brokers between microservices, determining how and where a message is received and sent. In the previous pipeline example, the data flow was so simple that there was no need to add any router. In this example, routers are necessary for connecting multiple preprocessors and encoders; otherwise preprocessors wouldn't know where to send the message. GNES Board automatically adds routers to the workflow when necessary, based on the types of two consecutive layers. It may also add stacked routers, as you can see between the encoder and indexer in the right graph.
+
+Again, the detailed YAML config of each component is not important for understanding the big picture, hence we omit it for now.
+
+This time we will run GNES via Docker Swarm. To do that, simply copy the generated Docker Swarm YAML config to a file, say `my-gnes.yml`, and then run
+```bash
+docker stack deploy --compose-file my-gnes.yml gnes-531
+```
+
+Note that `gnes-531` is your GNES stack name; keep that name in mind. If you forget it, you can always use `docker stack ls` to find out. To tell whether the whole stack is running successfully, you can use `docker service ls -f name=gnes-531`. The number of replicas `1/1` or `4/4` suggests everything is fine.
+
+Generally, a complete and successful Docker Swarm starting process should look like the following:
+
+
+
+
+When the GNES stack is ready and waiting for incoming data, you may feed data to it through the `gRPCFrontend`. Depending on your language (Python, C, Java, Go, HTTP, Shell, etc.) and the content form (image, video, text, etc.), the data feeding part can be slightly different.
+
+
+To stop a running GNES stack, you can use `docker stack rm gnes-531`.
+
+
+### Customize GNES to your need
+
+With the help of GNES Board, you can easily compose a GNES app for different purposes. The table below summarizes some common compositions with the corresponding workflow visualizations. Note that we hide the component-wise YAML config (i.e. `yaml_path`) for the sake of clarity.
+
+