This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

Closing a session, immediately after it's been opened, throws an error #20

Closed
sorsaffari opened this issue Mar 4, 2019 · 0 comments · Fixed by #21

Description

Closing a session immediately after it has been opened throws:
(node:57100) UnhandledPromiseRejectionWarning: Error: 2 UNKNOWN: null

Environment

  1. OS (where Grakn server runs): Mac OS 10.14.2
  2. Grakn version (and platform): 1.5.0 SNAPSHOT
  3. Grakn client-nodejs version: 1.5.0 SNAPSHOT
  4. Node.js version: 10.12.0

Reproducible Steps

Steps to create the smallest reproducible scenario:
index.js:

const Grakn = require("grakn-client");

openSession("social_network");

async function openSession (keyspace) {
    const client = new Grakn("localhost:48555");
    const session = await client.session(keyspace);
    await session.close(); // rejects with "2 UNKNOWN: null"
}

run: node index

Expected Output

Nothing should be logged to the terminal.

Actual Output

(node:57100) UnhandledPromiseRejectionWarning: Error: 2 UNKNOWN: null. Please check server logs for the stack trace.
    at Object.exports.createStatusError (/Users/soroushsaffari/tests/jclient/node_modules/grpc/src/common.js:91:15)
    at Object.onReceiveStatus (/Users/soroushsaffari/tests/jclient/node_modules/grpc/src/client_interceptors.js:1204:28)
    at InterceptingListener._callNext (/Users/soroushsaffari/tests/jclient/node_modules/grpc/src/client_interceptors.js:568:42)
    at InterceptingListener.onReceiveStatus (/Users/soroushsaffari/tests/jclient/node_modules/grpc/src/client_interceptors.js:618:8)
    at callback (/Users/soroushsaffari/tests/jclient/node_modules/grpc/src/client_interceptors.js:845:24)
(node:57100) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:57100) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Additional information

If the session is used to create a transaction, and the transaction is then closed before the session, no error is thrown:

const Grakn = require("grakn-client");

openSession("social_network");

async function openSession (keyspace) {
    const client = new Grakn("localhost:48555");
    const session = await client.session(keyspace);
    const transaction = await session.transaction(Grakn.txType.READ);
    await transaction.close();
    await session.close(); // no error in this case
}
@sorsaffari sorsaffari added this to the v1.5 milestone Mar 4, 2019
@marco-scoppetta marco-scoppetta assigned vmax and unassigned vmax Mar 5, 2019
dmitrii-ubskii added a commit to dmitrii-ubskii/typedb-client-nodejs that referenced this issue Sep 1, 2023
## What is the goal of this PR?

We remove the requirement that typedb-client be used within a tokio
runtime, making the library runtime-agnostic.
We also remove the distinction between core and cluster, and replace the
`Client` entry point with the more fundamental `Connection`.

## What is the motivation behind the changes?

### Encapsulate tokio runtime

This change serves three main purposes:

1. **Remove the requirement that the typedb-client library is run from a
tokio runtime.** The gRPC crate we use, `tonic`, and its networking
dependencies heavily rely on being used within a `tokio` runtime. That's
a fairly big restriction to place on all user code. Now that all RPC
interaction is hidden away in a background thread, the entire API exposed
to the user can be fully runtime-agnostic and, potentially down the
line, available in synchronous contexts as well (see the sketch after
this list).

2. **Eliminate the session close deadlocks in single-threaded
runtimes.** We create our own _system_ thread, which spawns a tokio
runtime and handles the RPC communication. This runs independently from
the user-facing runtime and as such is not affected by it.
Previously, if the user code happened to run in a single-threaded
runtime, dropping the last handle for a session would send a
`session_close()` request and block the only executor thread tokio had
available, deadlocking the entire application. This happened because, to
avoid sessions timing out on the server side, the session's drop method
would block until it received a signal that the request had been sent.
But because RPC communication is asynchronous, that request had to be
sent from a different task than the one performing the drop, and the
drop itself blocks and can never yield to that task, so on a single
executor thread the signal could never arrive.

3. **Encapsulate away RPC implementation details.** Should we ever decide
to move away from gRPC+protobuf, only a small set of files would need to
change, namely the contents of `connection::network`. The communication
under this new model uses
Rust-native `Request` and `Response` enums, abstracting away the
protocol structures.
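
A minimal sketch of this design, with placeholder `Request`/`Response` enums standing in for the crate's native messages: a dedicated system thread owns a current-thread tokio runtime and serves requests arriving over an mpsc channel, while the caller stays outside any async runtime and gets each reply over a oneshot channel.

```rust
use std::thread;
use tokio::sync::{mpsc, oneshot};

// Placeholder stand-ins for the crate's native message enums.
#[derive(Debug)]
enum Request {
    OpenSession(String),
    CloseSession(u64),
}

#[derive(Debug)]
enum Response {
    SessionOpened(u64),
    SessionClosed,
}

type Message = (Request, oneshot::Sender<Response>);

// Spawn a dedicated system thread that owns its own tokio runtime.
// Callers only ever touch the returned channel handle.
fn spawn_background_runtime() -> mpsc::UnboundedSender<Message> {
    let (request_sink, mut request_source) = mpsc::unbounded_channel::<Message>();
    thread::spawn(move || {
        let runtime = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build background runtime");
        runtime.block_on(async move {
            // The real client would await gRPC futures here; this sketch
            // just answers every request with a canned response.
            while let Some((request, reply)) = request_source.recv().await {
                let response = match request {
                    Request::OpenSession(_) => Response::SessionOpened(1),
                    Request::CloseSession(_) => Response::SessionClosed,
                };
                let _ = reply.send(response);
            }
        });
    });
    request_sink
}

fn main() {
    // Plain synchronous caller: no tokio runtime on this side, so a
    // single-threaded user runtime can never deadlock on a blocking close.
    let transmitter = spawn_background_runtime();
    for request in [Request::OpenSession("social_network".into()), Request::CloseSession(1)] {
        let (reply_sink, reply_source) = oneshot::channel();
        transmitter.send((request, reply_sink)).expect("background runtime stopped");
        println!("{:?}", reply_source.blocking_recv().unwrap());
    }
}
```

When every sender handle is dropped, the receive loop ends and the background thread exits on its own, so no explicit shutdown call is needed in this sketch.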

### Dissolve Client into underlying Connection

Requiring that a `session` may not outlive its `client`, and that a
`transaction` may not outlive its `session`, meant that to preserve
consistency we had to either extend the lifetimes of both `client` and
`session`, or require the user to share the `client` handle between
threads explicitly, even if they don't intend to use it beyond opening a
`session`.

Removing the top-of-the-hierarchy `Client` type and replacing it with a
primitive clonable `Connection` allows us to partially invert the
hierarchy, such that `DatabaseManager` and `Session` can explicitly own
the resources they rely upon. This is in line with how established Rust
crates (e.g. `tonic`, `mysql`) treat connections.

For multithreading or concurrency, ownership of a `session` needs to be
shared explicitly between the threads or tasks, whether via shared
pointers (read: `Arc<_>`) or explicit scope bounds (such as
`std::thread::scope()` in a synchronous version).
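
As a rough illustration of that ownership model (placeholder `Connection` and `Session` types, not the crate's actual API), the session owns its own connection handle, and concurrent use is made explicit by wrapping the session in an `Arc`:

```rust
use std::sync::Arc;
use std::thread;

// Placeholder types standing in for the crate's Connection and Session;
// the real ones manage RPC channels rather than a plain address string.
#[derive(Clone)]
struct Connection {
    address: String,
}

struct Session {
    connection: Connection,
    database: String,
}

impl Session {
    // The session owns its (cheaply clonable) connection handle, so no
    // type higher up a hierarchy has to be kept alive on its behalf.
    fn new(connection: Connection, database: &str) -> Self {
        Session { connection, database: database.to_owned() }
    }
}

fn main() {
    let connection = Connection { address: "localhost:1729".to_owned() };
    let session = Arc::new(Session::new(connection, "social_network"));

    // Sharing across threads is explicit: each worker holds its own Arc clone.
    let workers: Vec<_> = (0..2)
        .map(|id| {
            let session = Arc::clone(&session);
            thread::spawn(move || {
                println!(
                    "worker {id}: querying {} via {}",
                    session.database, session.connection.address
                );
            })
        })
        .collect();

    for worker in workers {
        worker.join().unwrap();
    }
}
```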

### Remove distinction between core and cluster

The shift from TypeDB Core + TypeDB Cluster to just TypeDB (Cluster /
Cloud) by default is reflected in the architecture. We now treat a core
server as effectively a single-node cluster instance that lacks
enterprise facilities (viz. user management).

This change greatly improves user experience: all code written to
interact with an open-source TypeDB instance is automatically valid for
the production instance with a simple change in initialization. As a
side effect, it also helps ensure that every integration test implemented
for core automatically covers cluster as well.

Merging core and cluster also vastly simplifies the internal structure
of the library, as only a few places have to know about which backend
they are running against, specifically the portions that deal with
authentication and, some day, user management.
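
A toy sketch of this point with placeholder types (not the crate's API): the core/cluster distinction collapses into how the connection is initialized, with a core server treated as a single-node set.

```rust
// Placeholder types; the real Connection manages per-node RPC channels.
enum ServerAddresses {
    Core(String),         // a single open-source server
    Cluster(Vec<String>), // the initial node set of a cluster deployment
}

struct Connection {
    nodes: Vec<String>,
}

impl Connection {
    // A core server is treated as a single-node cluster: past this point,
    // no code path needs to know which kind of deployment it talks to.
    fn open(addresses: ServerAddresses) -> Connection {
        let nodes = match addresses {
            ServerAddresses::Core(address) => vec![address],
            ServerAddresses::Cluster(addresses) => addresses,
        };
        Connection { nodes }
    }
}

fn main() {
    let core = Connection::open(ServerAddresses::Core("localhost:1729".into()));
    let cluster = Connection::open(ServerAddresses::Cluster(vec![
        "node1:1729".into(),
        "node2:1729".into(),
        "node3:1729".into(),
    ]));
    println!("core: {:?}, cluster: {:?}", core.nodes, cluster.nodes);
}
```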

## What are the changes implemented in this PR?

Major changes:
- New `connection` module:
  - `Connection` and `ServerConnection` conceptually roughly correspond to `ClusterRPC` and `ClusterServerRPC`: `Connection`'s only job is to manage the set of `ServerConnection`s, i.e. connections to individual nodes of the server;
  - `Connection`, created by the user, spawns a background single-threaded tokio runtime which houses all request handlers;
  - `ServerConnection` performs the actual message-passing between user code and its dedicated request dispatcher.
- Move the `common::rpc` module under `connection::network`:
  - move all protobuf serialization/deserialization into `connection::network::proto`, fully isolated from the rest of the crate;
  - provide native message enums intended for inter-thread communication (cf. `common::info` for crate-wide data structures);
  - merge `CoreRPC`, `ServerRPC`, and `ClusterServerRPC` into a single `RPCStub`;
  - add `RPCTransmitter`: a dispatcher agent meant to run in the background tokio runtime, which handles the communication with the server; its job is to:
    - listen for user requests over an inter-thread mpsc channel;
    - serialize the requests for tonic;
    - deserialize the responses;
    - send the responses back over the provided callback channel;
  - overhaul `TransactionRPC` into `TransactionTransmitter`, analogous to the `RPCTransmitter` above:
    - its listener loop buffers the requests into batches before dispatching them to the server (see the sketch after this list);
    - because, unlike `RPCTransmitter`, `TransactionTransmitter` wraps a bidirectional stream, it also has an associated listener agent that handles the user callbacks and auto-requests stream continuation.
- Remove `connection::core` and merge `connection::{cluster, server}` and `query` into a single top-level `database` module:
  - `ServerDatabase` and `ServerSession` are now hidden implementation details that handle communication with an individual node;
  - `Client`, as mentioned, has been removed entirely.
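
As referenced above, here is a small self-contained sketch of the batching pattern in the `TransactionTransmitter`'s listener loop, with a placeholder `TransactionRequest` type, a synchronous channel, and logging in place of the bidirectional gRPC stream:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Placeholder for the crate's native transaction request enum.
#[derive(Debug)]
struct TransactionRequest(u32);

// Drain whatever is currently queued into one batch instead of dispatching
// each request to the server individually.
fn collect_batch(source: &mpsc::Receiver<TransactionRequest>) -> Option<Vec<TransactionRequest>> {
    // Block for the first request, then greedily take everything else
    // already waiting in the channel.
    let first = source.recv().ok()?;
    let mut batch = vec![first];
    while let Ok(next) = source.try_recv() {
        batch.push(next);
    }
    Some(batch)
}

fn main() {
    let (sink, source) = mpsc::channel();

    let listener = thread::spawn(move || {
        while let Some(batch) = collect_batch(&source) {
            // The real transmitter would write the batch to the gRPC stream;
            // this sketch just logs what would have been dispatched together.
            println!("dispatching {} request(s): {:?}", batch.len(), batch);
        }
    });

    for i in 0..10 {
        sink.send(TransactionRequest(i)).unwrap();
        if i % 4 == 3 {
            // Pause so several requests accumulate into the next batch.
            thread::sleep(Duration::from_millis(10));
        }
    }
    drop(sink); // closing the channel ends the listener loop
    listener.join().unwrap();
}
```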

Minor changes:
- remove the no longer needed `async_dispatch` helper macro;
- restructure the `queries_...` tests into a single `queries` test
module that handles both core and cluster connections using a helper
permutation test macro (see the sketch after this list);
- add a `compatibility` test module that ensures the API is async
runtime-agnostic.
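
An illustrative sketch of such a permutation macro; `Connection`, `new_core_connection`, and `new_cluster_connection` are stand-ins rather than the helpers actually used in the PR:

```rust
// One test body is expanded into a core variant and a cluster variant,
// so every query test automatically runs against both backends.
struct Connection {
    address: String,
}

fn new_core_connection() -> Connection {
    Connection { address: "localhost:1729".to_owned() }
}

fn new_cluster_connection() -> Connection {
    Connection { address: "node1:1729".to_owned() }
}

macro_rules! permutation_test {
    ($name:ident, |$connection:ident| $body:block) => {
        mod $name {
            use super::*;

            #[test]
            fn core() {
                let $connection = new_core_connection();
                $body
            }

            #[test]
            fn cluster() {
                let $connection = new_cluster_connection();
                $body
            }
        }
    };
}

permutation_test!(connection_has_address, |connection| {
    // The same assertions run against both kinds of deployment.
    assert!(!connection.address.is_empty());
});
```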

Closes typedb#7, typedb#16, typedb#17, typedb#20, typedb#22, typedb#30.