Grakn Client Java 2.0.0-alpha
Documentation: http://dev.docs.grakn.ai/docs/client-api/java
Distribution
Available through https://repo.grakn.ai
<repositories>
    <repository>
        <id>repo.grakn.ai</id>
        <url>https://repo.grakn.ai/repository/maven/</url>
    </repository>
</repositories>
<dependencies>
    <dependency>
        <groupId>io.grakn.client</groupId>
        <artifactId>grakn-client</artifactId>
        <version>2.0.0-alpha</version>
    </dependency>
</dependencies>
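Once the dependency resolves, the client can be used from Java. The snippet below is only a minimal connection sketch, assuming class and method names (Grakn.coreClient, session, transaction, the address localhost:1729, the database name) that may not match the 2.0.0-alpha API exactly; the documentation linked above is the authoritative reference.

// A minimal connection sketch, NOT verbatim API: the class names, factory
// method, enum values, database name, and default address are assumptions;
// consult the documentation linked above for the exact 2.0.0-alpha API.
import grakn.client.Grakn;
import grakn.client.GraknClient;

public class QuickStart {
    public static void main(String[] args) {
        try (GraknClient client = Grakn.coreClient("localhost:1729")) {   // connect to a Grakn server
            try (GraknClient.Session session = client.session("my-database", GraknClient.Session.Type.DATA);
                 GraknClient.Transaction tx = session.transaction(GraknClient.Transaction.Type.READ)) {
                // Run queries through the transaction here; answers are streamed
                // back lazily using the reactive protocol described below.
            }
        }
    }
}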
New Client-Server Protocol: a Reactive Stream
With server performance scaled up, we needed to ensure that client-server communication did not become a bottleneck. We wanted the client application to leverage the server's asynchronous parallel computation and receive as many answers as possible, as soon as they are ready. However, we didn't want the client application to be overwhelmed with server responses. So, we needed some form of "back-pressure". And to maintain maximum throughput, everything had to be non-blocking. Sound familiar? Well, it's the "reactive stream" problem.
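Conceptually, this is the demand-signalling model of Java's Flow API (one of our inspirations, mentioned below). The sketch that follows is not Grakn's implementation; it only illustrates the back-pressure idea with standard java.util.concurrent.Flow types and a hypothetical batch size: the subscriber asks for a bounded batch of answers and only requests more once that batch is consumed, so a fast producer never overwhelms the consumer, and nothing blocks.

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.TimeUnit;

public class BackPressureSketch {

    // A subscriber that signals bounded demand: it asks for BATCH answers up
    // front and requests the next batch only once the current one is consumed.
    static final class BatchedSubscriber implements Flow.Subscriber<String> {
        static final int BATCH = 50;            // illustrative batch size
        private Flow.Subscription subscription;
        private int receivedInBatch = 0;

        @Override public void onSubscribe(Flow.Subscription subscription) {
            this.subscription = subscription;
            subscription.request(BATCH);        // initial demand: one batch
        }

        @Override public void onNext(String answer) {
            // ... hand the answer to the application without blocking ...
            if (++receivedInBatch == BATCH) {   // batch consumed:
                receivedInBatch = 0;
                subscription.request(BATCH);    // signal demand for the next batch
            }
        }

        @Override public void onError(Throwable error) { error.printStackTrace(); }
        @Override public void onComplete() { System.out.println("stream finished"); }
    }

    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new BatchedSubscriber());
            for (int i = 0; i < 200; i++) publisher.submit("answer-" + i);
        }
        TimeUnit.SECONDS.sleep(1);              // let the asynchronous delivery drain
    }
}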
We took inspiration from Java Flow and Akka Streams, and built our own reactive stream over gRPC: as lightweight as possible, and with our own optimisations. When an application sends a query from the client to the server, a (configurable) batch of asynchronously computed answers is immediately streamed from the server back to the client. This reduces network round trips and increases throughput. Once the first batch is consumed, the client requests another batch. To remove the waiting time between batches, the server predicts how long the next request will take to arrive and streams back surplus answers for that duration at the end of every batch. This allows us to maintain a continuous stream of answers at maximum throughput, without overwhelming the application.
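To make the batching-plus-prefetch idea concrete, here is an illustrative server-side sketch (not the actual Grakn code; the class name, the batch size, and the simple averaging used to refine the latency prediction are all assumptions): each client request triggers one batch of answers, followed by "surplus" answers streamed for roughly one predicted round trip, so the stream stays busy while the client's next request is in flight.

import java.util.Iterator;
import java.util.List;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative sketch of batching with prefetch (not the actual Grakn code).
final class AnswerStreamer<T> {
    private final Iterator<T> answers;        // asynchronously computed answers
    private final Consumer<T> respond;        // sends one answer to the client
    private final int batchSize;              // configurable batch size
    private long predictedRoundTripNanos;     // estimated client-to-server request latency

    AnswerStreamer(Iterator<T> answers, Consumer<T> respond, int batchSize, long initialEstimateNanos) {
        this.answers = answers;
        this.respond = respond;
        this.batchSize = batchSize;
        this.predictedRoundTripNanos = initialEstimateNanos;
    }

    /** Called for the initial query and for every subsequent continue-request. */
    void streamBatch() {
        int sent = 0;
        while (sent < batchSize && answers.hasNext()) {
            respond.accept(answers.next());
            sent++;
        }
        // Surplus phase: keep answering for roughly one predicted round trip,
        // covering the gap until the client's next request arrives.
        long deadline = System.nanoTime() + predictedRoundTripNanos;
        while (System.nanoTime() < deadline && answers.hasNext()) {
            respond.accept(answers.next());
        }
    }

    /** The server can refine its prediction as it observes real request gaps. */
    void recordObservedRoundTrip(long observedNanos) {
        predictedRoundTripNanos = (predictedRoundTripNanos + observedNanos) / 2;  // simple moving estimate
    }

    // Tiny demonstration with a pre-computed list standing in for the server's
    // asynchronous answer producer.
    public static void main(String[] args) {
        List<Integer> computed = IntStream.range(0, 500).boxed().collect(Collectors.toList());
        AnswerStreamer<Integer> streamer = new AnswerStreamer<>(
                computed.iterator(), a -> System.out.println("answer " + a),
                50, TimeUnit.MILLISECONDS.toNanos(5));
        streamer.streamBatch();   // initial query: one batch plus surplus answers
        streamer.streamBatch();   // the client's next continue-request
    }
}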
We then hit the limit on the number of responses gRPC can send per second. So the last trick was to bundle multiple query answers into a single RPC response. The impact on query response time was negligible, but it dramatically increased answer throughput once again.
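The bundling trick can be pictured as a small accumulator sitting in front of the RPC layer. The sketch below is illustrative only (the class name and the bundling limit are assumptions): answers are grouped so that a single RPC response carries many of them, trading a negligible amount of per-answer latency for much higher throughput.

import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch (not the actual Grakn code): instead of one RPC response
// per answer, answers are accumulated and flushed as a single bundled response.
final class ResponseBundler<T> {
    private final int maxPerResponse;              // assumed bundling limit
    private final Consumer<List<T>> sendResponse;  // sends one RPC response to the client
    private final List<T> pending = new ArrayList<>();

    ResponseBundler(int maxPerResponse, Consumer<List<T>> sendResponse) {
        this.maxPerResponse = maxPerResponse;
        this.sendResponse = sendResponse;
    }

    void onAnswer(T answer) {
        pending.add(answer);
        if (pending.size() >= maxPerResponse) flush();
    }

    void flush() {                                 // also called when a batch ends
        if (pending.isEmpty()) return;
        sendResponse.accept(new ArrayList<>(pending));
        pending.clear();
    }

    public static void main(String[] args) {
        ResponseBundler<String> bundler = new ResponseBundler<>(
                3, bundle -> System.out.println("one RPC response carrying " + bundle));
        for (int i = 0; i < 7; i++) bundler.onAnswer("answer-" + i);
        bundler.flush();   // send whatever is left when the batch ends
    }
}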
The new client architecture and Protobuf definitions have also been hugely simplified, making it easier for developers to build their own client libraries.
Please refer to the full release notes of Grakn 2.0.0-alpha to see the changes in Grakn 2.0.0.