
Add grabl-marker for autoupdating #17

Merged 1 commit on Mar 1, 2019

Conversation

@vmax (Contributor) commented on Mar 1, 2019

What is the goal of this PR?

Needed for https://github.com/graknlabs/grabl/issues/14

What are the changes implemented in this PR?

Adds a grabl-marker so the dependency script knows how to update it.

@vmax vmax added this to the v1.5 milestone Mar 1, 2019
@vmax vmax requested a review from haikalpribadi March 1, 2019 13:57
@haikalpribadi haikalpribadi merged commit 01d2c14 into typedb:master Mar 1, 2019
dmitrii-ubskii added a commit to dmitrii-ubskii/typedb-client-nodejs that referenced this pull request Sep 1, 2023
## What is the goal of this PR?

We remove the requirement that typedb-client be used within a tokio
runtime, making the library runtime-agnostic.
We also remove the distinction between core and cluster, and replace the
`Client` entry point with the more fundamental `Connection`.

## What is the motivation behind the changes?

### Encapsulate tokio runtime

This change serves three main purposes:

1. **Remove the requirement that the typedb-client library is run from a
tokio runtime.** The gRPC crate we use, `tonic`, and its networking
dependencies rely heavily on being used within a `tokio` runtime, which is
a fairly big restriction to place on all user code. Now that all RPC
interaction is hidden away in a background thread, the entire API exposed
to the user can be fully runtime-agnostic and, potentially down the
line, available in synchronous contexts as well.

2. **Eliminate the session-close deadlocks in single-threaded
runtimes.** We create our own _system_ thread, which spawns a tokio
runtime and handles the RPC communication (see the sketch after this
list). This runs independently of the user-facing runtime and as such is
not affected by it.
Previously, if the user code happened to run in a single-threaded
runtime, dropping the last handle for a session would send a
`session_close()` request and block the only executor thread tokio had
available, deadlocking the entire application. This happened because, to
avoid sessions timing out on the server side, the session's drop method
would block until it received a signal that the request had been sent.
Because RPC communication is asynchronous, the request must be sent from
a different task than the one performing the drop, and the drop itself is
blocking and cannot yield to that task.

3. **Encapsulate away RPC implementation details.** Should we ever
decide to move away from gRPC+protobuf, the change would only require
modifying a small set of files, namely the contents of
`connection::network`. Communication under this new model uses
Rust-native `Request` and `Response` enums, abstracting away the
protocol structures.
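
To make the mechanism concrete, here is a minimal sketch of the pattern,
assuming a single background system thread that owns a private tokio runtime
and serves requests arriving over a standard-library channel; the
`Request`/`Response` variants and helper names are illustrative, not the
crate's actual API:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-ins for the crate's native message enums.
enum Request {
    Ping,
}

enum Response {
    Pong,
}

async fn handle(request: Request) -> Response {
    match request {
        Request::Ping => Response::Pong, // real code would await an RPC here
    }
}

// Spawn a system thread that owns a private tokio runtime. User code sends
// (request, callback) pairs over a channel and never touches tokio itself.
fn spawn_background_runtime() -> mpsc::Sender<(Request, mpsc::Sender<Response>)> {
    let (request_sink, request_source) = mpsc::channel();
    thread::spawn(move || {
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build background runtime");
        // Drive each request to completion on this private runtime; blocking
        // here cannot starve any executor the user's code is running on.
        while let Ok((request, callback)) = request_source.recv() {
            let response = runtime.block_on(handle(request));
            let _ = callback.send(response);
        }
    });
    request_sink
}
```

With this shape, a blocking `Drop` on the user side can simply send its close
request and wait on the callback channel: the work happens on the dedicated
thread, so no user-facing executor can be starved.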

### Dissolve Client into underlying Connection

Requiring that a `session` may not outlive its `client`, and that a
`transaction` may not outlive its `session`, meant that to preserve
consistency we had to either extend the lifetimes of both `client` and
`session`, or require the user to share the `client` handle between
threads explicitly, even when it is not needed beyond opening a
`session`.

Removing the top-of-the-hierarchy `Client` type and replacing it with a
primitive clonable `Connection` allows us to partially invert the
hierarchy, such that `DatabaseManager` and `Session` can explicitly own
the resources they rely upon. This is in line with how established Rust
crates (e.g. `tonic`, `mysql`) treat connections.

For multithreading or concurrency, ownership of a `session` needs to be
explicitly shared between the threads or tasks, whether via shared
pointers (read: `Arc<_>`) or explicit scope bounds (such as
`std::thread::scope()` in a synchronous version), as in the sketch below.
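
As a rough usage sketch, with a stub `Session` standing in for the real
type, sharing via `Arc` looks like this:

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for the crate's Session type; illustrative only.
struct Session;

impl Session {
    fn transaction(&self) {
        // a real session would open a transaction against the server here
    }
}

fn main() {
    let session = Arc::new(Session);
    let workers: Vec<_> = (0..4)
        .map(|worker_id| {
            let session = Arc::clone(&session);
            thread::spawn(move || {
                session.transaction(); // each thread uses the shared session
                println!("worker {worker_id} opened a transaction");
            })
        })
        .collect();
    for worker in workers {
        worker.join().unwrap();
    }
}
```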

### Remove distinction between core and cluster

The shift from TypeDB Core + TypeDB Cluster to just TypeDB (Cluster /
Cloud) by default is reflected in the architecture. We now treat a core
server as effectively a single-node cluster instance that lacks
enterprise facilities (viz. user management).

This change greatly improves the user experience: all code written to
interact with an open-source TypeDB instance is automatically valid for a
production instance, with only the initialization changing (see the
sketch below). As a side effect, this also helps us ensure that all
integration tests implemented for core automatically cover cluster as
well.
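
For illustration, the intended difference in initialization might look like
the fragment below; the constructor names and signatures are assumptions
made for this sketch, not necessarily the crate's exact API:

```rust
// Assumed constructor names, for illustration only.
let connection = Connection::new_core("localhost:1729")?;

// ... versus, for a cluster/cloud deployment:
let connection = Connection::new_cluster(
    &["node1:1729", "node2:1729", "node3:1729"],
    credential,
)?;

// Everything downstream of the Connection is identical in both cases.
```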

Merging core and cluster also vastly simplifies the internal structure
of the library, as only a few places have to know about which backend
they are running against, specifically the portions that deal with
authentication and, some day, user management.

## What are the changes implemented in this PR?

Major changes:
- New `connection` module:
  - `Connection` and `ServerConnection` conceptually roughly correspond to `ClusterRPC` and `ClusterServerRPC`: `Connection`'s only job is to manage the set of `ServerConnection`s, i.e. connections to individual nodes of the server;
  - `Connection`, created by the user, spawns a background single-threaded tokio runtime which houses all request handlers;
  - `ServerConnection` performs the actual message-passing between user code and its dedicated request dispatcher.
- Move the `common::rpc` module under `connection::network`:
  - move all protobuf serialization/deserialization into `connection::network::proto`, fully isolated from the rest of the crate;
  - provide native message enums intended for inter-thread communication (cf. `common::info` for crate-wide data structures);
  - merge `CoreRPC`, `ServerRPC`, and `ClusterServerRPC` into a single `RPCStub`;
  - add `RPCTransmitter`: a dispatcher agent meant to run in the background tokio runtime, which handles the communication with the server; its job is to:
    - listen for user requests over an inter-thread mpsc channel;
    - serialize the requests for tonic;
    - deserialize the responses;
    - send the responses back over the provided callback channel;
  - overhaul `TransactionRPC` into `TransactionTransmitter`, analogous to the `RPCTransmitter` above:
    - its listener loop buffers requests into batches before dispatching them to the server (see the sketch after this list);
    - because, unlike `RPCTransmitter`, `TransactionTransmitter` wraps a bidirectional stream, it also has an associated listener agent that handles the user callbacks and auto-requests stream continuation.
- Remove `connection::core` and merge `connection::{cluster, server}` and `query` into a single top-level `database` module:
  - `ServerDatabase` and `ServerSession` are now hidden implementation details that handle communication with an individual node;
  - `Client`, as mentioned, has been removed entirely.
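
Below is a minimal sketch of the batching idea in `TransactionTransmitter`'s
listener loop, under assumed names (`TransactionRequest` and
`dispatch_to_server` are stand-ins) and an assumed fixed batching window:

```rust
use std::sync::mpsc::{Receiver, RecvTimeoutError};
use std::time::Duration;

struct TransactionRequest; // stand-in for the crate's native request enum

// Stand-in for serializing a batch and writing it to the gRPC stream.
fn dispatch_to_server(batch: Vec<TransactionRequest>) {
    let _ = batch;
}

const BATCH_WINDOW: Duration = Duration::from_millis(3); // assumed window

// Block for the first request of a batch, then drain whatever else arrives
// within the batching window, and dispatch the whole batch at once.
fn listener_loop(source: Receiver<TransactionRequest>) {
    while let Ok(first) = source.recv() {
        let mut batch = vec![first];
        loop {
            match source.recv_timeout(BATCH_WINDOW) {
                Ok(request) => batch.push(request),
                Err(RecvTimeoutError::Timeout | RecvTimeoutError::Disconnected) => break,
            }
        }
        dispatch_to_server(batch); // one streamed message per batch
    }
}
```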

Minor changes:
- remove the no-longer-needed `async_dispatch` helper macro;
- restructure the `queries_...` tests into a single `queries` test module that handles both core and cluster connections using a helper permutation test macro;
- add a `compatibility` test module that ensures the API is async-runtime-agnostic (see the sketch below).
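
A sketch of what such a compatibility check might look like, driving the
async API with the minimal executor from the `futures` crate rather than
tokio (`Connection::new_core` and the `DatabaseManager` calls are assumed
names, not necessarily the crate's exact API):

```rust
#[test]
fn api_works_without_a_tokio_runtime() {
    // futures' minimal executor provides no tokio reactor; if the library
    // secretly required an ambient tokio runtime, this would panic or hang.
    futures::executor::block_on(async {
        let connection = Connection::new_core("localhost:1729").unwrap();
        let databases = DatabaseManager::new(connection);
        let _names = databases.all().await.unwrap();
    });
}
```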

Closes typedb#7, typedb#16, typedb#17, typedb#20, typedb#22, typedb#30.