Releases: typedb/typedb-driver-python
Grakn Client Python 2.0.0-alpha-3
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client==2.0.0-alpha-3
Please refer to the full release notes of 2.0.0-alpha to see the changes contained in 2.0.0.
Grakn Client Python 2.0.0-alpha-2
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client==2.0.0-alpha-2
Please refer to the full release notes of 2.0.0-alpha to see the changes contained in 2.0.0.
Grakn Client Python 2.0.0-alpha
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.docs.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client==2.0.0-alpha
New Client-Server Protocol: a Reactive Stream
With the server's performance scaled up, we needed to ensure that client-server communication was not a bottleneck. We wanted the client application to leverage the server's asynchronous parallel computation and receive as many answers as possible, as soon as they were ready. However, we didn't want the client application to be overwhelmed with server responses. So, we needed some form of "back-pressure". However, to maintain maximum throughput, everything had to be non-blocking. Sounds familiar? Well, it's the "reactive stream" problem.
We took inspiration from Java Flow and Akka Stream, and built our own reactive stream over GRPC, as lightweight as possible, with our unique optimisations. When an application sends a query from the client to the server, a (configurable) batch of asynchronously computed answers is immediately streamed from the server to the client. This reduces network roundtrips and increases throughput. Once the first batch is consumed, the client requests another batch. We remove the waiting time between one batch and the next by predicting that duration and streaming back surplus answers for that period at the end of every batch. This allows us to maintain a continuous stream of answers at maximum throughput, without overflowing the application.
We then hit the max limit of responses GRPC can send per second. So the last trick was to bundle multiple query answers into a single server RPC "response". The impact on query response time was negligible, but it dramatically increased answer throughput again.
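To make the batching idea concrete, here is a minimal sketch of batch-at-a-time streaming with simple back-pressure. It is illustrative only: the `server.fetch(query, batch_size)` call is a hypothetical stand-in rather than the driver's real protocol, and the real implementation additionally prefetches surplus answers to hide the round trip.

```python
# Illustrative sketch only -- not the driver's actual implementation.
# `server` is any object exposing the assumed fetch(query, batch_size) call,
# which returns (batch_of_answers, done_flag).

class BatchedAnswerStream:
    """Iterate answers batch-by-batch, requesting the next batch only once
    the current one is consumed (a simple form of back-pressure)."""

    def __init__(self, server, query, batch_size=50):
        self._server = server
        self._query = query
        self._batch_size = batch_size
        self._buffer = []
        self._done = False

    def __iter__(self):
        return self

    def __next__(self):
        while not self._buffer and not self._done:
            # Ask for the next batch only once the previous one is consumed;
            # the real protocol also prefetches to hide the request round trip.
            batch, self._done = self._server.fetch(self._query, self._batch_size)
            self._buffer.extend(batch)
        if not self._buffer:
            raise StopIteration
        return self._buffer.pop(0)


# Usage (with any object exposing the assumed fetch() call):
# for answer in BatchedAnswerStream(server, "match $x isa thing; get;"):
#     print(answer)
```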
The new client architecture and Protobuf definitions are also hugely simplified to ease the developers' effort to build their own client libraries.
Please refer to the full release notes of Grakn 2.0.0-alpha to see the changes in Grakn 2.0.0.
Grakn Client Python 1.8.1
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
Bugs Fixed
- Fix leaking GRPC threads on transaction error.
We block the GRPC request observer in order to wait for new client requests. If the transaction errors and we do not unblock this observer, the GRPC thread will be left waiting forever. Previously, we had an error case that would result in this, so this PR patches that case by allowing `close()` to work correctly even on an error.
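With this fix, `close()` can be called safely even after an error. A minimal usage sketch, assuming the 1.8-era connection API (`GraknClient`, the keyspace name, and the query are placeholders taken from the docs of that era, not from this note):

```python
from grakn.client import GraknClient  # assumed 1.8-era import path

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="social_network") as session:
        tx = session.transaction().write()
        try:
            tx.query("insert $p isa person;")
            tx.commit()
        finally:
            # With this fix, close() also works correctly if the query
            # or commit above raised an error.
            tx.close()
```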
Code Refactors
Other Improvements
- Fix CI by including needed dependencies.
  The previous PR (#120) didn't account for the need to call `@graknlabs_dependencies//tool/sync:dependencies` in one of the CI jobs, so we restore some imports that are actually needed.
- Clean up the WORKSPACE file, removing extraneous load statements.
  To keep the codebase clean and maintainable, extraneous dependencies should not be present in `WORKSPACE`.
Grakn Client Python 1.8.0
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
- Introduce further Query Options.
  We introduce a modified `infer` option, and new `batch_size` and `explain` options for queries: `Transaction.query("...", infer=Transaction.Options.SERVER_DEFAULT, explain=Transaction.Options.SERVER_DEFAULT, batch_size=Transaction.Options.SERVER_DEFAULT)`
  The default `SERVER_DEFAULT` value means that the server will automatically choose the default value for `infer`, `explain`, and `batch_size`. For reference, the server defaults to `infer = True`, `explain = True`, and `batch_size = 50`.
  **Use `explain=True` if you want to retrieve explanations from your query.** This option was introduced to ensure correct Explanations without blowing up Transaction memory when they are not required. (See the usage sketch after this list.)
- Add future-style get for explicit waiting and error handling.
  Since the introduction of asynchronous query processing, the error handling model has become less clear, as an error could surface on a line unrelated to its corresponding query. To allow clients to explicitly await query completion, a `get()` method is added to the query result (iterator), which blocks until the results are received, or throws an exception on error.
  Clients looking to benefit from the asynchronous processing can continue without using the `get()` syntax.
- Add Explanation.get_rule().
  The `Rule` that corresponds to an explanation is now returned in the protocol responses, but the Python `Explanation` object did not record it. This PR records it if the rule is valid, and otherwise sets it to None.
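A combined usage sketch for the new options and the future-style `get()`. The connection boilerplate, keyspace, and queries are assumptions based on the 1.8-era docs rather than this note, and `get()` is shown purely as a way to wait for completion:

```python
from grakn.client import GraknClient  # assumed import path for grakn-client 1.8

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="social_network") as session:
        # Pattern 1: iterate lazily; infer and batch_size stay at the server
        # defaults, while explain=True requests explanations explicitly.
        with session.transaction().read() as tx:
            for answer in tx.query("match $p isa person; get;", explain=True):
                print(answer.get("p"))

        # Pattern 2: use the future-style get() to block until the query has
        # completed, so any server-side error surfaces on this exact line.
        with session.transaction().write() as tx:
            tx.query("insert $p isa person;").get()
            tx.commit()
```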
Bugs Fixed
- Fix explanation throwing an exception.
A bug was introduced with local concepts that made it impossible to fetch explanations.
Code Refactors
- Remove implicit, rename date to datetime, rename datatype to valuetype.
This change synchronises with changes in Grakn Core (typedb/typedb#5722) that remove implicit types, and also updates the client to no longer use `date`, but instead use `datetime` (including a protocol update). Finally, we also propagate the change from `datatype` to `valuetype`.
Other Improvements
Grakn Client Python 1.7.2
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
NOTE: This is the last `client-python` release that will support Python 2.x.
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
Bugs Fixed
- Fix queries returning more than 50 results.
  Fixes a bug that caused queries with more than 50 results to fail. This case was not correctly covered by a regression test, and that has now been fixed.
Code Refactors
Other Improvements
Grakn Client Python 1.7.1
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
Bugs Fixed
Code Refactors
Other Improvements
- Add async iteration.
  Adds async iteration to `client-python` so that results can be iterated whilst still waiting to be returned.
  This implementation follows the architecture found in `client-nodejs`. When requests are made, they are immediately sent to the server and, at the same time, a "response resolver" is placed onto the "resolver" queue. The response resolver queue ensures that responses are handled in the order that the requests were sent, thus allowing multiple requests to be sent whilst the responses are still being resolved, which is what enables us to handle async correctly.
  When attempting to retrieve a response, the GRPC stream is iterated and the responses are buffered into each resolver until the desired response is found and can be returned. This ensures that messages are not lost due to out-of-order iteration. (A minimal sketch of this pattern follows this list.)
- Relax strict dependencies on external packages.
  Some of our users reported issues with external dependency constraints being too strict.
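A minimal sketch of the resolver-queue pattern described above, simplified to one response per request; `send_request` and `grpc_stream` are hypothetical stand-ins rather than the driver's real internals:

```python
# Illustrative sketch of the resolver-queue pattern, not the driver's real code.
from collections import deque


class ResolverQueue:
    def __init__(self, grpc_stream, send_request):
        self._stream = grpc_stream   # iterator over incoming server responses
        self._send = send_request    # callable that sends one request
        self._pending = deque()      # response slots, kept in request order

    def request(self, req):
        # Send immediately; append a slot so responses stay matched to
        # requests in the order the requests were sent.
        slot = {"response": None}
        self._pending.append(slot)
        self._send(req)
        return slot

    def resolve(self, slot):
        # Iterate the stream, delivering each response to the oldest pending
        # slot, until the slot we are waiting on has been filled.
        while slot["response"] is None:
            response = next(self._stream)
            oldest = self._pending.popleft()
            oldest["response"] = response
        return slot["response"]
```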
Grakn Client Python 1.7.0
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
- Upgrade client-python to new protocol for grakn 1.7.
Update client-python to the new 1.0.5 protocol with changed iteration style and local/remote concept API split.
The new protocol significantly reduces the number of round trips necessary to complete queries, down to 1 in most cases, from the previous solution which required some multiple of the number of results. This should alleviate slowdown problems caused by connecting to a remote Grakn instance. Iteration now follows a streaming principle and results are pre-filled with data the user would normally be expected to retrieve via the concept API.
The step to split local and remote concepts helps us move towards "value" results rather than results which need to embed the transaction they were retrieved with. The added requirement to convert these concepts back to remote concepts should also make it clear to the user which methods require additional RPCs (where it wasn't clear beforehand that ALL methods required additional RPCs).
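To illustrate the local/remote split, here is a hedged sketch. The `as_remote(tx)` conversion and the Concept API calls are placeholder names for whichever methods the client exposes, since this note does not spell them out; the point is only that calls on the remote side trigger additional RPCs while local concepts come pre-filled:

```python
# Hypothetical sketch: as_remote(), attributes() and value() are placeholder
# names, and `session` is assumed to be an open session as in other examples.
with session.transaction().read() as tx:
    for answer in tx.query("match $p isa person; get;"):
        person = answer.get("p")          # local concept: pre-filled, no extra RPC

        remote_person = person.as_remote(tx)      # placeholder conversion
        for attribute in remote_person.attributes():
            print(attribute.value())              # each remote call costs a round trip
```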
Bugs Fixed
Code Refactors
Other Improvements
- Remove python 2 compatibility import imap.
Fix an issue where a Python 2-only import (`imap`) was being used when the client is meant to be Python 3 compatible.
Grakn Client Python 1.6.1
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
- Delete collect_concepts.
The shortcut method `answer_iterator.collect_concepts()` was leading to user confusion, as it can return duplicate concepts in the same list. As it doesn't really fit with the general paradigm of simplifying and removing syntactic sugar, we delete this method and ask users to replace calls with a small piece of custom code for now.
The replacement is very easy when `collect_concepts` was used with a query containing one variable:
`[ans.get("x") for ans in answer_iterator]`
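A fuller before/after sketch, with assumed connection boilerplate (the keyspace and query are placeholders based on the docs of that era, not this note):

```python
from grakn.client import GraknClient  # assumed 1.6-era import path

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="social_network") as session:
        with session.transaction().read() as tx:
            answer_iterator = tx.query("match $x isa person; get;")

            # Previously: concepts = answer_iterator.collect_concepts()
            # Now, collect the concepts yourself from the answer iterator.
            concepts = [ans.get("x") for ans in answer_iterator]
```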
Bugs Fixed
N/A
Code Refactors
N/A
Other Improvements
- Upgrade gRPC to 1.24.1.
  Update gRPC to a more recent version.
- Upgrade to Bazel 3.0.0.
  Upgrade Bazel to the latest upstream version.
- Update license year.
  Update the year in the Apache license and bump the `build-tools` version.
Grakn Client Python 1.6.0
PyPI package: https://pypi.org/project/grakn-client
Documentation: https://dev.grakn.ai/docs/client-api/python
Distribution
Available through https://pypi.org
pip install grakn-client
Or you can upgrade your local installation with:
pip install -U grakn-client
New Features
- Retrievable Explanations.
  As of typedb/typedb#5483, Grakn Core now:
  - has retrievable explanation trees
  - only the `ConceptMap` answer type has an `Explanation`
  - `pattern` has moved from `Explanation` to `ConceptMap`
  - `pattern` contains IDs for each variable as well as the query pattern
  These changes are reflected in client-python, along with a `has_explanation()` method on `ConceptMap` (see the sketch after these notes).
- Delete Queries return Void Answer Type, add Concept.is_deleted().
  As of typedb/typedb-protocol#18, we have decided on a slight paradigm shift with `delete` queries:
  - Straight delete queries `match...; delete...;` only return a message rather than the halfway house of all deleted Concept IDs (this was always awkward: it should either return a Concept, which is hard because the vertex is deleted, or nothing; we have opted for nothing).
  - If you want to know what was deleted, this implies your behaviour was a retrieve followed by a delete. This retrieve should be performed explicitly by the user, not implicitly by Grakn, i.e. a `match...; get...;` followed by a `delete` using the Concept API, or a separate `match...; delete;` query.
  Also, a new method `is_deleted()` exists on all `Concept`s as part of the Concept API.
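A hedged sketch of how these additions might be used together. The connection boilerplate, keyspace, and queries are placeholders; only `has_explanation()` and `is_deleted()` are taken from this note:

```python
from grakn.client import GraknClient  # assumed 1.6-era connection API

with GraknClient(uri="localhost:48555") as client:
    with client.session(keyspace="social_network") as session:
        with session.transaction().write() as tx:
            # Retrieve explicitly first, since delete queries no longer
            # return the deleted Concept IDs.
            answers = list(tx.query("match $p isa person; get;"))

            for answer in answers:
                if answer.has_explanation():   # new ConceptMap method
                    print("this answer was inferred")

            # The delete query itself now returns only a Void answer.
            tx.query("match $p isa person; delete $p;")

            # is_deleted() is available on every Concept via the Concept API.
            for answer in answers:
                print(answer.get("p").is_deleted())

            tx.commit()
```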
Bugs Fixed
- Refactor how version is provided to Python deployment rules.
  typedb/bazel-distribution#195 changed how the version is provided to `assemble_pip`. This PR adapts `client-python` to these changes.
- Fix CI by using Python 2 to run tests.
  Until `@rules_python` properly supports installing Python 3 packages, we switch running Python tests back to Python 2.
Code Refactors
- Refactor how version is provided to deploy_github rule.
Adapt `@graknlabs_client_python` to the latest changes in bazel-distribution (in particular, typedb/bazel-distribution#150).
Other Improvements
- Rename is_closed to is_open.
  To bring client-python in line with client-java, we rename the negative `tx.is_closed()` to the positive `tx.is_open()`.
- Use release-validate-deps to ensure that client-python depends on a released version of protocol.
  We have added a validation step using `//ci:release-validate-deps` to ensure that client-python is releasable only if it depends on a released version of protocol.
- Adapt to latest @graknlabs_bazel_distribution.
  The newest changes in `bazel-distribution` (typedb/bazel-distribution#181) are backwards-incompatible.