
feat: add pooled grpc transport #748

Merged
134 commits merged into googleapis:v3 on Apr 24, 2023

Conversation

@daniel-sanche (Contributor) commented on Mar 11, 2023

  • adds a new PooledBigtableGrpcAsyncIOTransport class that manages a pool of gRPC channels and round-robins requests between them (see the sketch below)
  • PooledBigtableGrpcAsyncIOTransport is implemented in a branch off https://github.com/googleapis/gapic-generator-python. The branch is added as a submodule, so changes can be tracked in this repo
  • implements client initialization
  • implements channel pool management
    • the client will periodically refresh gRPC channels to avoid the hourly server-side closure
    • the client will ping the backend on channel init to warm caches for future requests

Googlers: see go/python-bigtable-v3-design, specifically the section titled "Generator-Based Retries".
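To make the design concrete, here is a minimal sketch of the round-robin and refresh ideas described above, assuming a plain gRPC asyncio channel. The PooledChannel name and its parameters are illustrative stand-ins, not the generated PooledBigtableGrpcAsyncIOTransport API.

# Minimal sketch of the pooling idea; not the generated transport code.
from grpc import aio


class PooledChannel:
    """Holds a fixed pool of gRPC asyncio channels and hands them out round-robin."""

    def __init__(self, target: str, pool_size: int = 3):
        self._target = target
        self._pool = [aio.insecure_channel(target) for _ in range(pool_size)]
        self._next_idx = 0

    def next_channel(self) -> aio.Channel:
        # Round-robin: each call hands back the next channel in the pool.
        channel = self._pool[self._next_idx]
        self._next_idx = (self._next_idx + 1) % len(self._pool)
        return channel

    async def replace_channel(self, index: int, grace: float = 600.0) -> aio.Channel:
        # Periodic refresh: swap in a fresh channel, then close the old one
        # with a grace period so in-flight calls can finish.
        new_channel = aio.insecure_channel(self._target)
        old_channel, self._pool[index] = self._pool[index], new_channel
        await old_channel.close(grace)
        return new_channel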

Comment on lines 333 to 339 of google/cloud/bigtable/client.py:
except RuntimeError:
    warnings.warn(
        "Table should be created in an asyncio event loop."
        " Instance will not be registered with client for refresh",
        RuntimeWarning,
        stacklevel=2,
    )

Should this raise an error instead of just warning? Being able to initialize a Client outside an async context could make sense, but should we allow going as far as initializing a Table without being in an async context?


Are there main use cases where a user would benefit from initializing these outside of an async context and then registering them manually? That seems much more difficult and more work for the user.

daniel-sanche (Contributor, Author)

Yeah, I've been a little unsure about this too, but I think making the Table require an async context is probably a good middle ground. That will also allow us to change the registration methods into private methods.

I'll make that change, thanks for the input.
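For illustration, here is a rough sketch of that middle ground, where constructing a Table outside a running asyncio event loop raises instead of warning; the Table shape and the _register_instance helper are hypothetical, not the final client.py API.

# Illustrative only; the real client.py Table has a different signature.
import asyncio


class Table:
    def __init__(self, client, instance_id: str, table_id: str):
        self.client = client
        self.instance_id = instance_id
        self.table_id = table_id
        try:
            # get_running_loop() raises RuntimeError outside an event loop,
            # so Table creation fails fast instead of silently skipping
            # instance registration.
            loop = asyncio.get_running_loop()
        except RuntimeError as exc:
            raise RuntimeError(
                "Table must be created within an asyncio event loop"
            ) from exc
        # Registration can now live behind a private method on the client.
        self._register_task = loop.create_task(
            client._register_instance(instance_id, self)  # hypothetical helper
        )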

google/cloud/bigtable/client.py:
next_sleep = max(first_refresh - time.time(), 0)
if next_sleep > 0:
    # warm the current channel immediately
    channel = self.transport._grpc_channel._pool[channel_idx]


There are a couple of places where the pooled transport's _grpc_channel._pool private attribute is accessed. Perhaps make it public, or alternatively fold the functionality into the transport class itself as a public method. I get the impression that the transport classes intend to hide at least some of these channel-level management details from their users (in this case bigtable/client.py).

daniel-sanche (Contributor, Author)

Good points. I added pool_size and channels as public properties on the pooled transport, and am using them here. Thanks!
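A rough sketch of what those public properties could look like; the PooledTransport class here is a simplified stand-in for the generated transport, not the actual code.

# Simplified stand-in for the pooled transport; property names follow the
# discussion above (pool_size, channels), everything else is illustrative.
class PooledTransport:
    def __init__(self, pooled_channel):
        # pooled_channel holds the underlying list of channels in ._pool
        self._grpc_channel = pooled_channel

    @property
    def pool_size(self) -> int:
        """Number of channels in the pool."""
        return len(self._grpc_channel._pool)

    @property
    def channels(self) -> list:
        """The channels currently in the pool, in index order."""
        return list(self._grpc_channel._pool)


# client.py can then avoid reaching into private attributes, e.g.:
#   channel = self.transport.channels[channel_idx]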

@daniel-sanche merged commit f9a1907 into googleapis:v3 on Apr 24, 2023
daniel-sanche added a commit that referenced this pull request on Feb 5, 2024:
* feat: add new v3.0.0 API skeleton (#745)

* feat: improve rows filters (#751)

* feat: read rows query model class (#752)

* feat: implement row and cell model classes (#753)

* feat: add pooled grpc transport (#748)

* feat: implement read_rows (#762)

* feat: implement mutate rows (#769)

* feat: literal value filter (#767)

* feat: row_exists and read_row (#778)

* feat: read_modify_write and check_and_mutate_row (#780)

* feat: sharded read rows (#766)

* feat: ping and warm with metadata (#810)

* feat: mutate rows batching (#770)

* chore: restructure module paths (#816)

* feat: improve timeout structure (#819)

* fix: api errors apply to all bulk mutations

* chore: reduce public api surface (#820)

* feat: improve error group tracebacks on < py11 (#825)

* feat: optimize read_rows (#852)

* chore: add user agent suffix (#842)

* feat: optimize retries (#854)

* feat: add test proxy (#836)

* chore(tests): add conformance tests to CI for v3 (#870)

* chore(tests): turn off fast fail for conformance tests (#882)

* feat: add TABLE_DEFAULTS enum for table method arguments (#880)

* fix: pass None for retry in gapic calls (#881)

* feat: replace internal dictionaries with protos in gapic calls (#875)

* chore: optimize gapic calls (#863)

* feat: expose retryable error codes to users (#879)

* chore: update api_core submodule (#897)

* chore: merge main into experimental_v3 (#900)

* chore: pin conformance tests to v0.0.2 (#903)

* fix: bulk mutation eventual success (#909)

---------

Co-authored-by: Owl Bot <gcf-owl-bot[bot]@users.noreply.github.com>
Labels: api: bigtable (Issues related to the googleapis/python-bigtable API), size: xl (Pull request size is extra large)