From 32ef47f67e7e5e75b49674bf18120f40fae3f14a Mon Sep 17 00:00:00 2001
From: Dan O'Meara
Date: Thu, 19 Nov 2020 13:03:10 -0800
Subject: [PATCH] feat!: use microgenerator (#76)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Adds tutorials using Cloud Client [(#930)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/930)
* Adds tutorials.
* Removes unused enumerate
* Adds one more tutorial as well as fixes some copy/paste typos. [(#933)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/933)
* Adds new examples, replaces markdown with restructured text [(#945)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/945)
* Adds new examples, replaces markdown with restructured text
* Address review feedback
* Use videos from public bucket, update to new client library.
* Style nit
* Updates requirements [(#952)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/952)
* Fix README rst links [(#962)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/962)
* Fix README rst links
* Update all READMEs
* Change the usage file sample [(#958)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/958), since the file does not exist. Propose to use the same one as the tutorial: demomaker/gbikes_dinosaur.mp4
* Updates examples for video [(#968)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/968)
* Auto-update dependencies. [(#1093)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1093)
* Auto-update dependencies.
* Fix storage notification poll sample
  Change-Id: I6afbc79d15e050531555e4c8e51066996717a0f3
* Fix spanner samples
  Change-Id: I40069222c60d57e8f3d3878167591af9130895cb
* Drop coverage because it's not useful
  Change-Id: Iae399a7083d7866c3c7b9162d0de244fbff8b522
* Try again to fix flaky logging test
  Change-Id: I6225c074701970c17c426677ef1935bb6d7e36b4
* Update all generated readme auth instructions [(#1121)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1121)
  Change-Id: I03b5eaef8b17ac3dc3c0339fd2c7447bd3e11bd2
* Auto-update dependencies. [(#1123)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1123)
* Video v1beta2 [(#1088)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1088)
* update analyze_safe_search
* update analyze_shots
* update explicit_content_detection and test
* update face detection
* update label detection (path)
* update label detection (file)
* flake
* safe search --> explicit content
* update faces tutorial
* update client library quickstart
* update shotchange tutorial
* update labels tutorial
* correct spelling
* correct start_time_offset
* import order
* rebased
* Added Link to Python Setup Guide [(#1158)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1158)
* Update Readme.rst to add Python setup guide
  As requested in b/64770713. This sample is linked in documentation https://cloud.google.com/bigtable/docs/scaling, and it would make more sense to update the guide here than in the documentation.
* Update README.rst
* Update README.rst
* Update README.rst
* Update README.rst
* Update README.rst
* Update install_deps.tmpl.rst
* Updated readmegen scripts and re-generated related README files
* Fixed the lint error
* Tweak doc/help strings for sample tools [(#1160)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1160)
* Corrected copy-paste on doc string
* Updated doc/help string to be more specific to labels tool
* Made shotchange doc/help string more specific
* Tweaked doc/help string to indicate no arg expected
* Adjusted import order to satisfy flake8
* Wrapped doc string to 79 chars to flake8 correctly
* Adjusted import order to pass flake8 test
* Auto-update dependencies. [(#1186)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1186)
* update samples to v1 [(#1221)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1221)
* update samples to v1
* replace while loop with operation.result(timeout)
* addressing review comments
* flake
* flake
* Added "Open in Cloud Shell" buttons to README files [(#1254)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1254)
* Auto-update dependencies. [(#1377)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1377)
* Auto-update dependencies.
* Update requirements.txt
* Auto-update dependencies.
* Regenerate the README files and fix the Open in Cloud Shell link for some samples [(#1441)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1441)
* Update READMEs to fix numbering and add git clone [(#1464)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1464)
* Video Intelligence region tag update [(#1639)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1639)
* Auto-update dependencies. [(#1658)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1658)
* Auto-update dependencies.
* Rollback appengine/standard/bigquery/.
* Rollback appengine/standard/iap/.
* Rollback bigtable/metricscaler.
* Rolled back appengine/flexible/datastore.
* Rollback dataproc/
* Rollback jobs/api_client
* Rollback vision/cloud-client.
* Rollback functions/ocr/app.
* Rollback iot/api-client/end_to_end_example.
* Rollback storage/cloud-client.
* Rollback kms/api-client.
* Rollback dlp/
* Rollback bigquery/cloud-client.
* Rollback iot/api-client/manager.
* Rollback appengine/flexible/cloudsql_postgresql.
* Use explicit URIs for Video Intelligence sample tests [(#1743)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1743)
* Auto-update dependencies. [(#1846)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1846)
  ACK, merging.
* Longer timeouts to address intermittent failures [(#1871)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1871)
* Auto-update dependencies. [(#1980)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/1980)
* Auto-update dependencies.
* Update requirements.txt
* Update requirements.txt
* replace demomaker with cloud-samples-data/video for video intelligenc… [(#2162)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/2162)
* replace demomaker with cloud-samples-data/video for video intelligence samples
* flake
* Adds updates for samples profiler ... vision [(#2439)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/2439)
* Auto-update dependencies. [(#2005)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/2005)
* Auto-update dependencies.
* Revert update of appengine/flexible/datastore.
* revert update of appengine/flexible/scipy
* revert update of bigquery/bqml
* revert update of bigquery/cloud-client
* revert update of bigquery/datalab-migration
* revert update of bigtable/quickstart
* revert update of compute/api
* revert update of container_registry/container_analysis
* revert update of dataflow/run_template
* revert update of datastore/cloud-ndb
* revert update of dialogflow/cloud-client
* revert update of dlp
* revert update of functions/imagemagick
* revert update of functions/ocr/app
* revert update of healthcare/api-client/fhir
* revert update of iam/api-client
* revert update of iot/api-client/gcs_file_to_device
* revert update of iot/api-client/mqtt_example
* revert update of language/automl
* revert update of run/image-processing
* revert update of vision/automl
* revert update of testing/requirements.txt
* revert update of vision/cloud-client/detect
* revert update of vision/cloud-client/product_search
* revert update of jobs/v2/api_client
* revert update of jobs/v3/api_client
* revert update of opencensus
* revert update of translate/cloud-client
* revert update of speech/cloud-client

Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Doug Mahugh

* chore(deps): update dependency google-cloud-videointelligence to v1.14.0 [(#3169)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/3169)
* Simplify noxfile setup. [(#2806)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/2806)
* chore(deps): update dependency requests to v2.23.0
* Simplify noxfile and add version control.
* Configure appengine/standard to only test Python 2.7.
* Update Kokoro configs to match noxfile.
* Add requirements-test to each folder.
* Remove Py2 versions from everything except appengine/standard.
* Remove conftest.py.
* Remove appengine/standard/conftest.py
* Remove 'no-success-flaky-report' from pytest.ini.
* Add GAE SDK back to appengine/standard tests.
* Fix typo.
* Roll pytest to python 2 version.
* Add a bunch of testing requirements.
* Remove typo.
* Add appengine lib directory back in.
* Add some additional requirements.
* Fix issue with flake8 args.
* Even more requirements.
* Re-add appengine conftest.py.
* Add a few more requirements.
* Even more Appengine requirements.
* Add webtest for appengine/standard/mailgun.
* Add some additional requirements.
* Add workaround for issue with mailjet-rest.
* Add responses for appengine/standard/mailjet.

Co-authored-by: Renovate Bot

* fix: changes positional to named parameters in Video samples [(#4017)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4017)

Changes calls to `VideoClient.annotate_video()` so that GCS URIs are provided as named parameters. Example:

```python
operation = video_client.annotate_video(path, features=features)
```

Becomes:

```python
operation = video_client.annotate_video(input_uri=path, features=features)
```

* Update dependency google-cloud-videointelligence to v1.15.0 [(#4041)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4041)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [google-cloud-videointelligence](https://togithub.com/googleapis/python-videointelligence) | minor | `==1.14.0` -> `==1.15.0` |

---

### Release Notes
googleapis/python-videointelligence

### [`v1.15.0`](https://togithub.com/googleapis/python-videointelligence/blob/master/CHANGELOG.md#​1150-httpswwwgithubcomgoogleapispython-videointelligencecomparev1140v1150-2020-06-09)

[Compare Source](https://togithub.com/googleapis/python-videointelligence/compare/v1.14.0...v1.15.0)

##### Features

- add support for streaming automl action recognition in v1p3beta1; make 'features' a positional param for annotate_video in betas ([#​31](https://www.github.com/googleapis/python-videointelligence/issues/31)) ([586f920](https://www.github.com/googleapis/python-videointelligence/commit/586f920a1932e1a813adfed500502fba0ff5edb7)), closes [#​517](https://www.github.com/googleapis/python-videointelligence/issues/517) [#​538](https://www.github.com/googleapis/python-videointelligence/issues/538) [#​565](https://www.github.com/googleapis/python-videointelligence/issues/565) [#​576](https://www.github.com/googleapis/python-videointelligence/issues/576) [#​506](https://www.github.com/googleapis/python-videointelligence/issues/506) [#​586](https://www.github.com/googleapis/python-videointelligence/issues/586) [#​585](https://www.github.com/googleapis/python-videointelligence/issues/585)
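The positional-to-keyword migration described in the samples fix above is mechanical enough to automate, and the diffstat below shows the package shipping a `scripts/fixup_keywords.py` helper for this purpose. The following is a minimal, hypothetical sketch of the idea using Python's `ast` module — it is not the actual script, and the `annotate_video`/`input_uri` mapping is taken only from the sample change quoted above:

```python
import ast

class AnnotateVideoFixer(ast.NodeTransformer):
    """Rewrite positional args of annotate_video() calls into keyword form.

    The real fixup script covers many methods and edge cases; this sketch
    handles only the first positional argument of annotate_video(),
    assumed to map to the input_uri keyword.
    """

    KEYWORDS = ["input_uri"]  # positional slot -> keyword name (assumption)

    def visit_Call(self, node: ast.Call) -> ast.Call:
        self.generic_visit(node)
        func = node.func
        if isinstance(func, ast.Attribute) and func.attr == "annotate_video":
            # Move leading positional args to the front of the keyword list.
            for i, arg in enumerate(node.args[: len(self.KEYWORDS)]):
                node.keywords.insert(i, ast.keyword(arg=self.KEYWORDS[i], value=arg))
            node.args = node.args[len(self.KEYWORDS):]
        return node

def fix_source(source: str) -> str:
    """Parse, transform, and re-emit source (requires Python 3.9+ for ast.unparse)."""
    tree = AnnotateVideoFixer().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

print(fix_source("operation = video_client.annotate_video(path, features=features)"))
# -> operation = video_client.annotate_video(input_uri=path, features=features)
```

Because the rewrite happens at the AST level rather than with regexes, calls that already use the keyword form pass through unchanged.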
---

### Renovate configuration

:date: **Schedule**: At any time (no schedule defined).

:vertical_traffic_light: **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

:recycle: **Rebasing**: Never, or you tick the rebase/retry checkbox.

:no_bell: **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] If you want to rebase/retry this PR, check this box

---

This PR has been generated by [WhiteSource Renovate](https://renovate.whitesourcesoftware.com). View repository job log [here](https://app.renovatebot.com/dashboard#GoogleCloudPlatform/python-docs-samples).

* chore(deps): update dependency pytest to v5.4.3 [(#4279)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4279)
* chore(deps): update dependency pytest to v5.4.3
* specify pytest for python 2 in appengine

Co-authored-by: Leah Cole

* Update dependency pytest to v6 [(#4390)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4390)
* chore: pin sphinx
* chore: adds samples templates
* chore: temporarily pins sphinx
* chore: blacken noxfile
* chore: lints
* chore(deps): update dependency google-cloud-videointelligence to v1.16.0 [(#4798)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4798)
* chore: fixes flaky tests
* chore(deps): update dependency pytest to v6.1.1 [(#4761)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4761)
* chore(deps): update dependency pytest to v6.1.2 [(#4921)](https://github.com/GoogleCloudPlatform/python-docs-samples/issues/4921)

Co-authored-by: Charles Engelke

* chore: updates samples templates
* chore: cleans up merge conflicts
* chore: blacken
* feat!: use microgenerator
* docs: update samples for microgenerator client
* docs: updates shotchange samples to microgen
* chore: deletes temp files
* chore: lint and blacken
* Update UPGRADING.md

Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>

* Update setup.py

Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>
Co-authored-by: Gus Class
Co-authored-by: Bill Prin
Co-authored-by: florencep
Co-authored-by: DPE bot
Co-authored-by: Jon Wayne Parrott
Co-authored-by: Yu-Han Liu
Co-authored-by: michaelawyu
Co-authored-by: Perry Stoll
Co-authored-by: Frank Natividad
Co-authored-by: michaelawyu
Co-authored-by: Alix Hamilton
Co-authored-by: Charles Engelke
Co-authored-by: Yu-Han Liu
Co-authored-by: Kurtis Van Gent <31518063+kurtisvg@users.noreply.github.com>
Co-authored-by: Doug Mahugh
Co-authored-by: WhiteSource Renovate
Co-authored-by: Eric Schmidt
Co-authored-by: Leah Cole
Co-authored-by: gcf-merge-on-green[bot] <60162190+gcf-merge-on-green[bot]@users.noreply.github.com>
Co-authored-by: Charles Engelke
Co-authored-by: Bu Sun Kim <8822365+busunkim96@users.noreply.github.com>
---
 .../.coveragerc | 17 +-
 .../google-cloud-videointelligence/README.rst | 9 +-
 .../UPGRADING.md | 154 +
 .../docs/UPGRADING.md | 1 +
 .../docs/gapic/v1/api.rst | 6 -
 .../docs/gapic/v1/types.rst | 6 -
 .../docs/gapic/v1beta2/api.rst | 6 -
 .../docs/gapic/v1p1beta1/api.rst | 6 -
 .../docs/gapic/v1p1beta1/types.rst | 6 -
 .../docs/gapic/v1p2beta1/api.rst | 6 -
 .../docs/gapic/v1p2beta1/types.rst | 6 -
 .../docs/gapic/v1p3beta1/api.rst | 6 -
 .../docs/gapic/v1p3beta1/types.rst | 6 -
 .../docs/index.rst | 39 +-
 .../docs/videointelligence_v1/services.rst | 6 +
 .../docs/videointelligence_v1/types.rst | 6 +
 .../videointelligence_v1beta2/services.rst | 6 +
 .../types.rst | 4 +-
 .../videointelligence_v1p1beta1/services.rst | 6 +
 .../videointelligence_v1p1beta1/types.rst | 6 +
 .../videointelligence_v1p2beta1/services.rst | 6 +
 .../videointelligence_v1p2beta1/types.rst | 6 +
 .../videointelligence_v1p3beta1/services.rst | 9 +
 .../videointelligence_v1p3beta1/types.rst | 6 +
 .../cloud/videointelligence/__init__.py | 170 +
 .../google/cloud/videointelligence/py.typed | 2 +
 .../cloud/videointelligence_v1/__init__.py | 124 +-
 .../videointelligence_v1/gapic/__init__.py | 0
 .../cloud/videointelligence_v1/gapic/enums.py | 86 -
 .../gapic/transports/__init__.py | 0
 ...deo_intelligence_service_grpc_transport.py | 137 -
 .../video_intelligence_service_client.py | 310 -
 ...ideo_intelligence_service_client_config.py | 37 -
 .../videointelligence_v1/proto/__init__.py | 0
 .../proto/video_intelligence.proto | 900 ---
 .../proto/video_intelligence_pb2.py | 5444 -------------
 .../proto/video_intelligence_pb2_grpc.py | 92 -
 .../cloud/videointelligence_v1/py.typed | 2 +
 .../services}/__init__.py | 14 +-
 .../video_intelligence_service/__init__.py} | 17 +-
 .../async_client.py | 270 +
 .../video_intelligence_service/client.py | 438 ++
 .../transports/__init__.py | 38 +
 .../transports/base.py | 141 +
 .../transports/grpc.py | 281 +
 .../transports/grpc_asyncio.py | 287 +
 .../cloud/videointelligence_v1/types.py | 56 -
 .../videointelligence_v1/types/__init__.py | 109 +
 .../types/video_intelligence.py | 1344 ++++
 .../videointelligence_v1beta2/__init__.py | 82 +-
 .../gapic/__init__.py | 0
 .../videointelligence_v1beta2/gapic/enums.py | 76 -
 .../gapic/transports/__init__.py | 0
 ...deo_intelligence_service_grpc_transport.py | 137 -
 .../video_intelligence_service_client.py | 307 -
 ...ideo_intelligence_service_client_config.py | 37 -
 .../proto/__init__.py | 0
 .../proto/video_intelligence.proto | 408 -
 .../proto/video_intelligence_pb2.py | 2451 ------
 .../proto/video_intelligence_pb2_grpc.py | 92 -
 .../cloud/videointelligence_v1beta2/py.typed | 2 +
 .../services}/__init__.py | 14 +-
 .../video_intelligence_service/__init__.py | 24 +
 .../async_client.py | 270 +
 .../video_intelligence_service/client.py | 438 ++
 .../transports/__init__.py | 38 +
 .../transports/base.py | 141 +
 .../transports/grpc.py | 281 +
 .../transports/grpc_asyncio.py | 287 +
 .../cloud/videointelligence_v1beta2/types.py | 56 -
 .../types/__init__.py | 65 +
 .../types/video_intelligence.py | 577 ++
 .../videointelligence_v1p1beta1/__init__.py | 82 +-
 .../gapic/__init__.py | 0
 .../gapic/enums.py | 76 -
 .../gapic/transports/__init__.py | 0
 ...deo_intelligence_service_grpc_transport.py | 137 -
 .../video_intelligence_service_client.py | 309 -
 ...ideo_intelligence_service_client_config.py | 37 -
 .../proto/__init__.py | 0
 .../proto/video_intelligence.proto | 444 --
 .../proto/video_intelligence_pb2.py | 2559 -------
 .../proto/video_intelligence_pb2_grpc.py | 92 -
 .../videointelligence_v1p1beta1/py.typed | 2 +
 .../services/__init__.py | 16 +
 .../video_intelligence_service/__init__.py | 24 +
 .../async_client.py | 270 +
 .../video_intelligence_service/client.py | 438 ++
 .../transports/__init__.py | 38 +
 .../transports/base.py | 141 +
 .../transports/grpc.py | 281 +
 .../transports/grpc_asyncio.py | 287 +
 .../videointelligence_v1p1beta1/types.py | 56 -
 .../types/__init__.py | 65 +
 .../types/video_intelligence.py | 636 ++
 .../videointelligence_v1p2beta1/__init__.py | 90 +-
 .../gapic/__init__.py | 0
 .../gapic/enums.py | 78 -
 .../gapic/transports/__init__.py | 0
 ...deo_intelligence_service_grpc_transport.py | 137 -
 .../video_intelligence_service_client.py | 309 -
 ...ideo_intelligence_service_client_config.py | 37 -
 .../proto/__init__.py | 0
 .../proto/video_intelligence.proto | 476 --
 .../proto/video_intelligence_pb2.py | 2853 -------
 .../proto/video_intelligence_pb2_grpc.py | 92 -
 .../videointelligence_v1p2beta1/py.typed | 2 +
 .../services/__init__.py | 16 +
 .../video_intelligence_service/__init__.py | 24 +
 .../async_client.py | 270 +
 .../video_intelligence_service/client.py | 438 ++
 .../transports/__init__.py | 38 +
 .../transports/base.py | 141 +
 .../transports/grpc.py | 281 +
 .../transports/grpc_asyncio.py | 287 +
 .../videointelligence_v1p2beta1/types.py | 56 -
 .../types/__init__.py | 73 +
 .../types/video_intelligence.py | 687 ++
 .../videointelligence_v1p3beta1/__init__.py | 167 +-
 .../gapic/__init__.py | 0
 .../gapic/enums.py | 113 -
 ...aming_video_intelligence_service_client.py | 252 -
 ...ideo_intelligence_service_client_config.py | 47 -
 .../gapic/transports/__init__.py | 0
 ...deo_intelligence_service_grpc_transport.py | 130 -
 ...deo_intelligence_service_grpc_transport.py | 137 -
 .../video_intelligence_service_client.py | 309 -
 ...ideo_intelligence_service_client_config.py | 37 -
 .../proto/__init__.py | 0
 .../proto/video_intelligence.proto | 1089 ---
 .../proto/video_intelligence_pb2.py | 6711 -----------------
 .../proto/video_intelligence_pb2_grpc.py | 173 -
 .../videointelligence_v1p3beta1/py.typed | 2 +
 .../services/__init__.py | 16 +
 .../__init__.py | 24 +
 .../async_client.py | 239 +
 .../client.py | 397 +
 .../transports/__init__.py | 40 +
 .../transports/base.py | 137 +
 .../transports/grpc.py | 267 +
 .../transports/grpc_asyncio.py | 271 +
 .../video_intelligence_service/__init__.py | 24 +
 .../async_client.py | 270 +
 .../video_intelligence_service/client.py | 438 ++
 .../transports/__init__.py | 38 +
 .../transports/base.py | 141 +
 .../transports/grpc.py | 281 +
 .../transports/grpc_asyncio.py | 287 +
 .../videointelligence_v1p3beta1/types.py | 56 -
 .../types/__init__.py | 133 +
 .../types/video_intelligence.py | 1638 ++++
 .../google-cloud-videointelligence/mypy.ini | 3 +
 .../google-cloud-videointelligence/noxfile.py | 8 +-
 .../samples/analyze/analyze.py | 129 +-
 .../samples/analyze/analyze_test.py | 4 +-
 .../samples/analyze/beta_snippets.py | 161 +-
 .../samples/analyze/beta_snippets_test.py | 8 +-
 .../samples/analyze/noxfile.py | 34 +-
 .../analyze/video_detect_faces_beta.py | 16 +-
 .../analyze/video_detect_faces_gcs_beta.py | 16 +-
 .../samples/analyze/video_detect_logo.py | 16 +-
 .../samples/analyze/video_detect_logo_gcs.py | 16 +-
 .../analyze/video_detect_person_beta.py | 12 +-
 .../analyze/video_detect_person_gcs_beta.py | 12 +-
 .../samples/labels/labels.py | 47 +-
 .../samples/labels/labels_test.py | 4 +-
 .../samples/labels/noxfile.py | 34 +-
 .../samples/quickstart/noxfile.py | 34 +-
 .../samples/quickstart/quickstart.py | 11 +-
 .../samples/shotchange/noxfile.py | 35 +-
 .../samples/shotchange/shotchange.py | 32 +-
 .../samples/shotchange/shotchange_test.py | 5 +-
 .../scripts/fixup_keywords.py | 179 +
 .../google-cloud-videointelligence/setup.py | 15 +-
 .../synth.metadata | 10 +-
 .../google-cloud-videointelligence/synth.py | 80 +-
 .../tests/system/test_system.py | 2 +-
 ...st_video_intelligence_service_client_v1.py | 123 -
 ...deo_intelligence_service_client_v1beta2.py | 123 -
 ...o_intelligence_service_client_v1p1beta1.py | 123 -
 ...o_intelligence_service_client_v1p2beta1.py | 123 -
 ...o_intelligence_service_client_v1p3beta1.py | 115 -
 ...o_intelligence_service_client_v1p3beta1.py | 123 -
 .../gapic/videointelligence_v1/__init__.py | 1 +
 .../test_video_intelligence_service.py | 1079 +++
 .../videointelligence_v1beta2/__init__.py | 1 +
 .../test_video_intelligence_service.py | 1079 +++
 .../videointelligence_v1p1beta1/__init__.py | 1 +
 .../test_video_intelligence_service.py | 1079 +++
 .../videointelligence_v1p2beta1/__init__.py | 1 +
 .../test_video_intelligence_service.py | 1079 +++
 .../videointelligence_v1p3beta1/__init__.py | 1 +
 ...st_streaming_video_intelligence_service.py | 996 +++
 .../test_video_intelligence_service.py | 1079 +++
 194 files changed, 21890 insertions(+), 28815 deletions(-)
 create mode 100644 packages/google-cloud-videointelligence/UPGRADING.md
 create mode 120000 packages/google-cloud-videointelligence/docs/UPGRADING.md
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1/api.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1/types.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1beta2/api.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/api.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/types.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/api.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/types.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/api.rst
 delete mode 100644 packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/types.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1/services.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1/types.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/services.rst
 rename packages/google-cloud-videointelligence/docs/{gapic/v1beta2 => videointelligence_v1beta2}/types.rst (62%)
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/services.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/types.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/services.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/types.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/services.rst
 create mode 100644 packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/types.rst
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence/py.typed
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/enums.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/video_intelligence_service_grpc_transport.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client_config.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence.proto
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2_grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/py.typed
 rename packages/google-cloud-videointelligence/google/{ => cloud/videointelligence_v1/services}/__init__.py (71%)
 rename packages/google-cloud-videointelligence/google/cloud/{videointelligence.py => videointelligence_v1/services/video_intelligence_service/__init__.py} (67%)
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/async_client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/base.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc_asyncio.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/video_intelligence.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/enums.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/video_intelligence_service_grpc_transport.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client_config.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2_grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/py.typed
 rename packages/google-cloud-videointelligence/google/cloud/{ => videointelligence_v1beta2/services}/__init__.py (71%)
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/async_client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/base.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc_asyncio.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/video_intelligence.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/enums.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/video_intelligence_service_grpc_transport.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client_config.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2_grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/py.typed
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/async_client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/base.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc_asyncio.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/video_intelligence.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/enums.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/video_intelligence_service_grpc_transport.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client_config.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2_grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/py.typed
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/async_client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/client.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/base.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc_asyncio.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/__init__.py
 create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/video_intelligence.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/__init__.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/enums.py
 delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client.py
 delete mode 100644 
packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client_config.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/__init__.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/streaming_video_intelligence_service_grpc_transport.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/video_intelligence_service_grpc_transport.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client_config.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/__init__.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2_grpc.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/py.typed create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/async_client.py create mode 100644 
packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/client.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/base.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc_asyncio.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/async_client.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/client.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/base.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc_asyncio.py delete mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types.py 
create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/__init__.py create mode 100644 packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/video_intelligence.py create mode 100644 packages/google-cloud-videointelligence/mypy.ini create mode 100644 packages/google-cloud-videointelligence/scripts/fixup_keywords.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1/test_video_intelligence_service_client_v1.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1beta2/test_video_intelligence_service_client_v1beta2.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1p1beta1/test_video_intelligence_service_client_v1p1beta1.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1p2beta1/test_video_intelligence_service_client_v1p2beta1.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_streaming_video_intelligence_service_client_v1p3beta1.py delete mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_video_intelligence_service_client_v1p3beta1.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/__init__.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/test_video_intelligence_service.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/__init__.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/test_video_intelligence_service.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/__init__.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/test_video_intelligence_service.py create mode 100644 
packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/__init__.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/test_video_intelligence_service.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/__init__.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_streaming_video_intelligence_service.py create mode 100644 packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_video_intelligence_service.py diff --git a/packages/google-cloud-videointelligence/.coveragerc b/packages/google-cloud-videointelligence/.coveragerc index 0d8e6297dc9c..bcbee4685022 100644 --- a/packages/google-cloud-videointelligence/.coveragerc +++ b/packages/google-cloud-videointelligence/.coveragerc @@ -14,25 +14,20 @@ # See the License for the specific language governing permissions and # limitations under the License. -# Generated by synthtool. DO NOT EDIT! [run] branch = True -omit = - google/cloud/__init__.py [report] fail_under = 100 show_missing = True +omit = google/cloud/videointelligence/__init__.py exclude_lines = # Re-enable the standard pragma pragma: NO COVER # Ignore debug-only repr def __repr__ - # Ignore abstract methods - raise NotImplementedError -omit = - */gapic/*.py - */proto/*.py - */core/*.py - */site-packages/*.py - google/cloud/__init__.py + # Ignore pkg_resources exceptions. + # This is added at the module level as a safeguard for if someone + # generates the code and tries to run it without pip installing. This + # makes it virtually impossible to test properly. 
+ except pkg_resources.DistributionNotFound \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/README.rst b/packages/google-cloud-videointelligence/README.rst index 4c99dcff3e0c..c60a10370b53 100644 --- a/packages/google-cloud-videointelligence/README.rst +++ b/packages/google-cloud-videointelligence/README.rst @@ -56,12 +56,13 @@ dependencies. Supported Python Versions ^^^^^^^^^^^^^^^^^^^^^^^^^ -Python >= 3.5 +Python >= 3.6 -Deprecated Python Versions -^^^^^^^^^^^^^^^^^^^^^^^^^^ -Python == 2.7. Python 2.7 support will be removed on January 1, 2020. +Unsupported Python Versions +^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Python == 2.7. +The last version of this library compatible with Python 2.7 is google-cloud-videointelligence==1.17.0. Mac/Linux ^^^^^^^^^ diff --git a/packages/google-cloud-videointelligence/UPGRADING.md b/packages/google-cloud-videointelligence/UPGRADING.md new file mode 100644 index 000000000000..a18969733383 --- /dev/null +++ b/packages/google-cloud-videointelligence/UPGRADING.md @@ -0,0 +1,154 @@ +# 2.0.0 Migration Guide + +The 2.0 release of the `google-cloud-videointelligence` client is a significant upgrade based on a [next-gen code generator](https://github.com/googleapis/gapic-generator-python), and includes substantial interface changes. Existing code written for earlier versions of this library will likely require updates to use this version. This document describes the changes that have been made, and what you need to do to update your usage. + +If you experience issues or have questions, please file an [issue](https://github.com/googleapis/python-videointelligence/issues). + +## Supported Python Versions + +> **WARNING**: Breaking change + +The 2.0.0 release requires Python 3.6+. + + +## Method Calls + +> **WARNING**: Breaking change + +Methods expect request objects. We provide a script that will convert most common use cases. 
+ +* Install the library + +```sh +python3 -m pip install google-cloud-videointelligence +``` + +* The script `fixup_keywords.py` is shipped with the library. It expects +an input directory (with the code to convert) and an empty destination directory. + +```sh +$ fixup_keywords.py --input-directory .samples/ --output-directory samples/ +``` + +**Before:** +```py +from google.cloud import videointelligence + +client = videointelligence.VideoIntelligenceServiceClient() +input_uri = "gs://cloud-samples-data/video/cat.mp4" +features = [videointelligence.enums.Feature.LABEL_DETECTION] +operation = client.annotate_video( + input_uri=input_uri, features=features +) +``` + + +**After:** +```py +from google.cloud import videointelligence + +client = videointelligence.VideoIntelligenceServiceClient() +input_uri = "gs://cloud-samples-data/video/cat.mp4" +features = [videointelligence.Feature.LABEL_DETECTION] +operation = client.annotate_video(request={"input_uri": input_uri, "features": features}) +``` + +### More Details + +In `google-cloud-videointelligence<2.0.0`, parameters required by the API were positional parameters and optional parameters were keyword parameters. + +**Before:** +```py + def annotate_video( + self, + input_uri=None, + input_content=None, + features=None, + video_context=None, + output_uri=None, + location_id=None, + retry=google.api_core.gapic_v1.method.DEFAULT, + timeout=google.api_core.gapic_v1.method.DEFAULT, + metadata=None, + ): +``` + +In the 2.0.0 release, all methods have a single positional parameter `request`. Method docstrings indicate whether a parameter is required or optional. + +Some methods have additional keyword-only parameters. The available parameters depend on the [`google.api.method_signature` annotation](https://github.com/googleapis/googleapis/blob/master/google/cloud/videointelligence/v1/video_intelligence.proto#L51) specified by the API producer. 
+ + +**After:** +```py + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: +``` + +> **NOTE:** The `request` parameter and flattened keyword parameters for the API are mutually exclusive. +> Passing both will result in an error. + + +Both of these calls are valid: + +```py +response = client.annotate_video( + request={ + "input_uri": input_uri, + "features": features + } +) +``` + +```py +response = client.annotate_video( + input_uri=input_uri, + features=features +) +``` + +This call is invalid because it mixes `request` with a keyword argument `features`. Executing this code +will result in an error. + +```py +response = client.annotate_video( + request={ + "input_uri": input_uri + }, + features=features +) +``` + + + +## Enums and Types + + +> **WARNING**: Breaking change + +The submodules `enums` and `types` have been removed. 
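The request-vs-keyword mutual exclusion described above can be illustrated with a small self-contained sketch. The function below is a hypothetical stand-in, not the generated client's actual code; it only mimics the argument check that the 2.x clients perform before building a request:

```py
from typing import Optional, Sequence


def annotate_video(request: Optional[dict] = None, *,
                   input_uri: Optional[str] = None,
                   features: Optional[Sequence[int]] = None) -> dict:
    """Hypothetical sketch of the 2.x calling convention.

    A single optional ``request`` object is mutually exclusive with the
    flattened keyword parameters.
    """
    has_flattened = any(arg is not None for arg in (input_uri, features))
    if request is not None and has_flattened:
        # Mixing a request object with flattened keywords is rejected.
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    if request is None:
        # Build the request from the flattened keyword parameters.
        request = {"input_uri": input_uri, "features": features}
    return dict(request)  # a real client would send this to the API


# Valid: a single request object.
annotate_video(request={"input_uri": "gs://bucket/cat.mp4", "features": [1]})

# Valid: flattened keyword arguments only.
annotate_video(input_uri="gs://bucket/cat.mp4", features=[1])

# Invalid: mixing both raises ValueError.
```

Either calling style yields the same request; only mixing the two styles fails.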
+ +**Before:** +```py + +from google.cloud import videointelligence + +features = [videointelligence.enums.Feature.TEXT_DETECTION] +video_context = videointelligence.types.VideoContext() +``` + + +**After:** +```py +from google.cloud import videointelligence + +features = [videointelligence.Feature.TEXT_DETECTION] +video_context = videointelligence.VideoContext() +``` diff --git a/packages/google-cloud-videointelligence/docs/UPGRADING.md b/packages/google-cloud-videointelligence/docs/UPGRADING.md new file mode 120000 index 000000000000..01097c8c0fb8 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/UPGRADING.md @@ -0,0 +1 @@ +../UPGRADING.md \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1/api.rst b/packages/google-cloud-videointelligence/docs/gapic/v1/api.rst deleted file mode 100644 index cfc34f438f49..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Client for Cloud Video Intelligence API -======================================= - -.. automodule:: google.cloud.videointelligence_v1 - :members: - :inherited-members: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1/types.rst b/packages/google-cloud-videointelligence/docs/gapic/v1/types.rst deleted file mode 100644 index d003b9caa2f8..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1/types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Types for Cloud Video Intelligence API Client -============================================= - -.. 
automodule:: google.cloud.videointelligence_v1.types - :members: - :noindex: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1beta2/api.rst b/packages/google-cloud-videointelligence/docs/gapic/v1beta2/api.rst deleted file mode 100644 index 006f41313e22..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1beta2/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Client for Google Cloud Video Intelligence API -============================================== - -.. automodule:: google.cloud.videointelligence_v1beta2 - :members: - :inherited-members: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/api.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/api.rst deleted file mode 100644 index 5774850d626b..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Client for Cloud Video Intelligence API -======================================= - -.. automodule:: google.cloud.videointelligence_v1p1beta1 - :members: - :inherited-members: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/types.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/types.rst deleted file mode 100644 index e18e529a4a9c..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p1beta1/types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Types for Cloud Video Intelligence API Client -============================================= - -.. 
automodule:: google.cloud.videointelligence_v1p1beta1.types - :members: - :noindex: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/api.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/api.rst deleted file mode 100644 index 0dafdd894ab6..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Client for Cloud Video Intelligence API -======================================= - -.. automodule:: google.cloud.videointelligence_v1p2beta1 - :members: - :inherited-members: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/types.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/types.rst deleted file mode 100644 index 2db5249be9a0..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p2beta1/types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Types for Cloud Video Intelligence API Client -============================================= - -.. automodule:: google.cloud.videointelligence_v1p2beta1.types - :members: - :noindex: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/api.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/api.rst deleted file mode 100644 index 570eaffa4dc3..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/api.rst +++ /dev/null @@ -1,6 +0,0 @@ -Client for Cloud Video Intelligence API -======================================= - -.. 
automodule:: google.cloud.videointelligence_v1p3beta1 - :members: - :inherited-members: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/types.rst b/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/types.rst deleted file mode 100644 index e6f2094e2025..000000000000 --- a/packages/google-cloud-videointelligence/docs/gapic/v1p3beta1/types.rst +++ /dev/null @@ -1,6 +0,0 @@ -Types for Cloud Video Intelligence API Client -============================================= - -.. automodule:: google.cloud.videointelligence_v1p3beta1.types - :members: - :noindex: \ No newline at end of file diff --git a/packages/google-cloud-videointelligence/docs/index.rst b/packages/google-cloud-videointelligence/docs/index.rst index a7b14385f846..f421726724cf 100644 --- a/packages/google-cloud-videointelligence/docs/index.rst +++ b/packages/google-cloud-videointelligence/docs/index.rst @@ -12,12 +12,12 @@ API. By default, you will get ``v1``, the latest stable version. .. toctree:: :maxdepth: 2 - gapic/v1/api - gapic/v1/types + videointelligence_v1/services + videointelligence_v1/types -A new beta release with additional features over the current stable version, -spelled ``v1p3beta1``, is provided to allow you to use these new features. -These are expected to move into the stable release soon; until then, the +A beta release with additional features over the current stable version, +spelled ``v1p3beta1``, is also provided. +These are expected to move into the stable release; until then, the usual beta admonishment (changes are possible, etc.) applies. An API and type reference is provided for this beta: @@ -25,8 +25,8 @@ An API and type reference is provided for this beta: .. 
toctree:: :maxdepth: 2 - gapic/v1p3beta1/api - gapic/v1p3beta1/types + videointelligence_v1p3beta1/services + videointelligence_v1p3beta1/types The previous beta releases, spelled ``v1p2beta1``, ``v1p1beta1``, and ``v1beta2``, are provided to continue to support code previously written @@ -34,17 +34,28 @@ against them. In order to use them, you will want to import from e.g. ``google.cloud.videointelligence_v1beta2`` in lieu of ``google.cloud.videointelligence_v1``. -An API and type reference is provided the these betas also: +An API and type reference is provided for these betas also: .. toctree:: :maxdepth: 2 - gapic/v1p2beta1/api - gapic/v1p2beta1/types - gapic/v1p1beta1/api - gapic/v1p1beta1/types - gapic/v1beta2/api - gapic/v1beta2/types + videointelligence_v1p2beta1/services + videointelligence_v1p2beta1/types + videointelligence_v1p1beta1/services + videointelligence_v1p1beta1/types + videointelligence_v1beta2/services + videointelligence_v1beta2/types + + +Migration Guide +--------------- + +See the guide below for instructions on migrating to the 2.x release of this library. + +.. toctree:: + :maxdepth: 2 + + UPGRADING Changelog diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1/services.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1/services.rst new file mode 100644 index 000000000000..8f9db9bb84a5 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1/services.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Videointelligence v1 API +================================================== + +.. 
automodule:: google.cloud.videointelligence_v1.services.video_intelligence_service + :members: + :inherited-members: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1/types.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1/types.rst new file mode 100644 index 000000000000..b5d39277f4fd --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1/types.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Videointelligence v1 API +=============================================== + +.. automodule:: google.cloud.videointelligence_v1.types + :members: + :show-inheritance: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/services.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/services.rst new file mode 100644 index 000000000000..751201903467 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/services.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Videointelligence v1beta2 API +======================================================= + +.. automodule:: google.cloud.videointelligence_v1beta2.services.video_intelligence_service + :members: + :inherited-members: diff --git a/packages/google-cloud-videointelligence/docs/gapic/v1beta2/types.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/types.rst similarity index 62% rename from packages/google-cloud-videointelligence/docs/gapic/v1beta2/types.rst rename to packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/types.rst index 4e1f5f6c168d..47f60c25e451 100644 --- a/packages/google-cloud-videointelligence/docs/gapic/v1beta2/types.rst +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1beta2/types.rst @@ -1,6 +1,6 @@ -Types for Google Cloud Video Intelligence API Client +Types for Google Cloud Videointelligence v1beta2 API ==================================================== .. 
automodule:: google.cloud.videointelligence_v1beta2.types :members: - :noindex: \ No newline at end of file + :show-inheritance: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/services.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/services.rst new file mode 100644 index 000000000000..51e86ba2b691 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/services.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Videointelligence v1p1beta1 API +========================================================= + +.. automodule:: google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service + :members: + :inherited-members: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/types.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/types.rst new file mode 100644 index 000000000000..e9c681c58510 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p1beta1/types.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Videointelligence v1p1beta1 API +====================================================== + +.. automodule:: google.cloud.videointelligence_v1p1beta1.types + :members: + :show-inheritance: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/services.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/services.rst new file mode 100644 index 000000000000..f4258599b380 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/services.rst @@ -0,0 +1,6 @@ +Services for Google Cloud Videointelligence v1p2beta1 API +========================================================= + +.. 
automodule:: google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service + :members: + :inherited-members: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/types.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/types.rst new file mode 100644 index 000000000000..30c5462f5f14 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p2beta1/types.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Videointelligence v1p2beta1 API +====================================================== + +.. automodule:: google.cloud.videointelligence_v1p2beta1.types + :members: + :show-inheritance: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/services.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/services.rst new file mode 100644 index 000000000000..0a2b15bd3a6c --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/services.rst @@ -0,0 +1,9 @@ +Services for Google Cloud Videointelligence v1p3beta1 API +========================================================= + +.. automodule:: google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service + :members: + :inherited-members: +.. automodule:: google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service + :members: + :inherited-members: diff --git a/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/types.rst b/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/types.rst new file mode 100644 index 000000000000..4305d5882042 --- /dev/null +++ b/packages/google-cloud-videointelligence/docs/videointelligence_v1p3beta1/types.rst @@ -0,0 +1,6 @@ +Types for Google Cloud Videointelligence v1p3beta1 API +====================================================== + +.. 
automodule:: google.cloud.videointelligence_v1p3beta1.types + :members: + :show-inheritance: diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence/__init__.py new file mode 100644 index 000000000000..5429a65f7046 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence/__init__.py @@ -0,0 +1,170 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from google.cloud.videointelligence_v1.services.video_intelligence_service.async_client import ( + VideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1.services.video_intelligence_service.client import ( + VideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + AnnotateVideoProgress, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + AnnotateVideoRequest, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + AnnotateVideoResponse, +) +from google.cloud.videointelligence_v1.types.video_intelligence import DetectedAttribute +from google.cloud.videointelligence_v1.types.video_intelligence import DetectedLandmark +from google.cloud.videointelligence_v1.types.video_intelligence import Entity +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ExplicitContentAnnotation, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ExplicitContentDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ExplicitContentFrame, +) +from google.cloud.videointelligence_v1.types.video_intelligence import FaceAnnotation +from google.cloud.videointelligence_v1.types.video_intelligence import ( + FaceDetectionAnnotation, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + FaceDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import FaceFrame +from google.cloud.videointelligence_v1.types.video_intelligence import FaceSegment +from google.cloud.videointelligence_v1.types.video_intelligence import Feature +from google.cloud.videointelligence_v1.types.video_intelligence import LabelAnnotation +from google.cloud.videointelligence_v1.types.video_intelligence import ( + LabelDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + LabelDetectionMode, +) +from 
google.cloud.videointelligence_v1.types.video_intelligence import LabelFrame +from google.cloud.videointelligence_v1.types.video_intelligence import LabelSegment +from google.cloud.videointelligence_v1.types.video_intelligence import Likelihood +from google.cloud.videointelligence_v1.types.video_intelligence import ( + LogoRecognitionAnnotation, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + NormalizedBoundingBox, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + NormalizedBoundingPoly, +) +from google.cloud.videointelligence_v1.types.video_intelligence import NormalizedVertex +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ObjectTrackingAnnotation, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ObjectTrackingConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ObjectTrackingFrame, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + PersonDetectionAnnotation, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + PersonDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + ShotChangeDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import SpeechContext +from google.cloud.videointelligence_v1.types.video_intelligence import ( + SpeechRecognitionAlternative, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + SpeechTranscription, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + SpeechTranscriptionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import TextAnnotation +from google.cloud.videointelligence_v1.types.video_intelligence import ( + TextDetectionConfig, +) +from google.cloud.videointelligence_v1.types.video_intelligence import TextFrame +from google.cloud.videointelligence_v1.types.video_intelligence 
import TextSegment +from google.cloud.videointelligence_v1.types.video_intelligence import TimestampedObject +from google.cloud.videointelligence_v1.types.video_intelligence import Track +from google.cloud.videointelligence_v1.types.video_intelligence import ( + VideoAnnotationProgress, +) +from google.cloud.videointelligence_v1.types.video_intelligence import ( + VideoAnnotationResults, +) +from google.cloud.videointelligence_v1.types.video_intelligence import VideoContext +from google.cloud.videointelligence_v1.types.video_intelligence import VideoSegment +from google.cloud.videointelligence_v1.types.video_intelligence import WordInfo + +__all__ = ( + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "DetectedAttribute", + "DetectedLandmark", + "Entity", + "ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "FaceAnnotation", + "FaceDetectionAnnotation", + "FaceDetectionConfig", + "FaceFrame", + "FaceSegment", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "LogoRecognitionAnnotation", + "NormalizedBoundingBox", + "NormalizedBoundingPoly", + "NormalizedVertex", + "ObjectTrackingAnnotation", + "ObjectTrackingConfig", + "ObjectTrackingFrame", + "PersonDetectionAnnotation", + "PersonDetectionConfig", + "ShotChangeDetectionConfig", + "SpeechContext", + "SpeechRecognitionAlternative", + "SpeechTranscription", + "SpeechTranscriptionConfig", + "TextAnnotation", + "TextDetectionConfig", + "TextFrame", + "TextSegment", + "TimestampedObject", + "Track", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoIntelligenceServiceAsyncClient", + "VideoIntelligenceServiceClient", + "VideoSegment", + "WordInfo", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence/py.typed new file 
mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/__init__.py index e786bed2ef87..4d4929e10480 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/__init__.py @@ -1,47 +1,115 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +# - -from __future__ import absolute_import -import sys -import warnings - -from google.cloud.videointelligence_v1 import types -from google.cloud.videointelligence_v1.gapic import enums -from google.cloud.videointelligence_v1.gapic import video_intelligence_service_client - - -if sys.version_info[:2] == (2, 7): - message = ( - "A future version of this library will drop support for Python 2.7. 
" - "More details about Python 2 support for Google Cloud Client Libraries " - "can be found at https://cloud.google.com/python/docs/python2-sunset/" - ) - warnings.warn(message, DeprecationWarning) - - -class VideoIntelligenceServiceClient( - video_intelligence_service_client.VideoIntelligenceServiceClient -): - __doc__ = video_intelligence_service_client.VideoIntelligenceServiceClient.__doc__ - enums = enums +from .services.video_intelligence_service import VideoIntelligenceServiceClient +from .types.video_intelligence import AnnotateVideoProgress +from .types.video_intelligence import AnnotateVideoRequest +from .types.video_intelligence import AnnotateVideoResponse +from .types.video_intelligence import DetectedAttribute +from .types.video_intelligence import DetectedLandmark +from .types.video_intelligence import Entity +from .types.video_intelligence import ExplicitContentAnnotation +from .types.video_intelligence import ExplicitContentDetectionConfig +from .types.video_intelligence import ExplicitContentFrame +from .types.video_intelligence import FaceAnnotation +from .types.video_intelligence import FaceDetectionAnnotation +from .types.video_intelligence import FaceDetectionConfig +from .types.video_intelligence import FaceFrame +from .types.video_intelligence import FaceSegment +from .types.video_intelligence import Feature +from .types.video_intelligence import LabelAnnotation +from .types.video_intelligence import LabelDetectionConfig +from .types.video_intelligence import LabelDetectionMode +from .types.video_intelligence import LabelFrame +from .types.video_intelligence import LabelSegment +from .types.video_intelligence import Likelihood +from .types.video_intelligence import LogoRecognitionAnnotation +from .types.video_intelligence import NormalizedBoundingBox +from .types.video_intelligence import NormalizedBoundingPoly +from .types.video_intelligence import NormalizedVertex +from .types.video_intelligence import ObjectTrackingAnnotation +from 
.types.video_intelligence import ObjectTrackingConfig +from .types.video_intelligence import ObjectTrackingFrame +from .types.video_intelligence import PersonDetectionAnnotation +from .types.video_intelligence import PersonDetectionConfig +from .types.video_intelligence import ShotChangeDetectionConfig +from .types.video_intelligence import SpeechContext +from .types.video_intelligence import SpeechRecognitionAlternative +from .types.video_intelligence import SpeechTranscription +from .types.video_intelligence import SpeechTranscriptionConfig +from .types.video_intelligence import TextAnnotation +from .types.video_intelligence import TextDetectionConfig +from .types.video_intelligence import TextFrame +from .types.video_intelligence import TextSegment +from .types.video_intelligence import TimestampedObject +from .types.video_intelligence import Track +from .types.video_intelligence import VideoAnnotationProgress +from .types.video_intelligence import VideoAnnotationResults +from .types.video_intelligence import VideoContext +from .types.video_intelligence import VideoSegment +from .types.video_intelligence import WordInfo __all__ = ( - "enums", - "types", + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "DetectedAttribute", + "DetectedLandmark", + "Entity", + "ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "FaceAnnotation", + "FaceDetectionAnnotation", + "FaceDetectionConfig", + "FaceFrame", + "FaceSegment", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "LogoRecognitionAnnotation", + "NormalizedBoundingBox", + "NormalizedBoundingPoly", + "NormalizedVertex", + "ObjectTrackingAnnotation", + "ObjectTrackingConfig", + "ObjectTrackingFrame", + "PersonDetectionAnnotation", + "PersonDetectionConfig", + "ShotChangeDetectionConfig", + "SpeechContext", + "SpeechRecognitionAlternative", + 
"SpeechTranscription", + "SpeechTranscriptionConfig", + "TextAnnotation", + "TextDetectionConfig", + "TextFrame", + "TextSegment", + "TimestampedObject", + "Track", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoSegment", + "WordInfo", "VideoIntelligenceServiceClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/enums.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/enums.py deleted file mode 100644 index 3115c183b27f..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/enums.py +++ /dev/null @@ -1,86 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Feature(enum.IntEnum): - """ - Video annotation feature. - - Attributes: - FEATURE_UNSPECIFIED (int): Unspecified. - LABEL_DETECTION (int): Label detection. Detect objects, such as dog or flower. - SHOT_CHANGE_DETECTION (int): Shot change detection. - EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - FACE_DETECTION (int): Human face detection. 
- SPEECH_TRANSCRIPTION (int): Speech transcription. - TEXT_DETECTION (int): OCR text detection and tracking. - OBJECT_TRACKING (int): Object detection and tracking. - LOGO_RECOGNITION (int): Logo detection, tracking, and recognition. - PERSON_DETECTION (int): Person detection. - """ - - FEATURE_UNSPECIFIED = 0 - LABEL_DETECTION = 1 - SHOT_CHANGE_DETECTION = 2 - EXPLICIT_CONTENT_DETECTION = 3 - FACE_DETECTION = 4 - SPEECH_TRANSCRIPTION = 6 - TEXT_DETECTION = 7 - OBJECT_TRACKING = 9 - LOGO_RECOGNITION = 12 - PERSON_DETECTION = 14 - - -class LabelDetectionMode(enum.IntEnum): - """ - Label detection mode. - - Attributes: - LABEL_DETECTION_MODE_UNSPECIFIED (int): Unspecified. - SHOT_MODE (int): Detect shot-level labels. - FRAME_MODE (int): Detect frame-level labels. - SHOT_AND_FRAME_MODE (int): Detect both shot-level and frame-level labels. - """ - - LABEL_DETECTION_MODE_UNSPECIFIED = 0 - SHOT_MODE = 1 - FRAME_MODE = 2 - SHOT_AND_FRAME_MODE = 3 - - -class Likelihood(enum.IntEnum): - """ - Bucketized representation of likelihood. - - Attributes: - LIKELIHOOD_UNSPECIFIED (int): Unspecified likelihood. - VERY_UNLIKELY (int): Very unlikely. - UNLIKELY (int): Unlikely. - POSSIBLE (int): Possible. - LIKELY (int): Likely. - VERY_LIKELY (int): Very likely. 
- """ - - LIKELIHOOD_UNSPECIFIED = 0 - VERY_UNLIKELY = 1 - UNLIKELY = 2 - POSSIBLE = 3 - LIKELY = 4 - VERY_LIKELY = 5 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/video_intelligence_service_grpc_transport.py deleted file mode 100644 index 710074374fd3..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/transports/video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.videointelligence_v1.proto import video_intelligence_pb2_grpc - - -class VideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1 VideoIntelligenceService API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. 
- """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "video_intelligence_service_stub": video_intelligence_pb2_grpc.VideoIntelligenceServiceStub( - channel - ), - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. 
- self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def annotate_video(self): - """Return the gRPC stub for :meth:`VideoIntelligenceServiceClient.annotate_video`. - - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["video_intelligence_service_stub"].AnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py deleted file mode 100644 index d62f41f8c163..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py +++ /dev/null @@ -1,310 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.cloud.videointelligence.v1 VideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import grpc - -from google.cloud.videointelligence_v1.gapic import enums -from google.cloud.videointelligence_v1.gapic import ( - video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1.gapic.transports import ( - video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class VideoIntelligenceServiceClient(object): - """Service that implements the Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = "google.cloud.videointelligence.v1.VideoIntelligenceService" - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - VideoIntelligenceServiceClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.VideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.VideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. 
API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) 
- self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def annotate_video( - self, - input_uri=None, - input_content=None, - # NOTE: Keep features a keyword param that comes after `input_uri` until - # the microgenerator migration to avoid breaking users. - # See https://github.com/googleapis/python-videointelligence/issues/7. - features=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Example: - >>> from google.cloud import videointelligence_v1 - >>> from google.cloud.videointelligence_v1 import enums - >>> - >>> client = videointelligence_v1.VideoIntelligenceServiceClient() - >>> - >>> features_element = enums.Feature.LABEL_DETECTION - >>> features = [features_element] - >>> input_uri = 'gs://cloud-samples-data/video/cat.mp4' - >>> - >>> response = client.annotate_video(input_uri=input_uri, features=features) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - features (list[~google.cloud.videointelligence_v1.types.Feature]): Required. Requested video annotation features. - input_uri (str): Input video location. 
Currently, only `Cloud - Storage <https://cloud.google.com/storage/>`__ URIs are supported. URIs - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. To - identify multiple videos, a video URI may include wildcards in the - ``object-id``. Supported wildcards: '*' to match 0 or more characters; - '?' to match 1 character. If unset, the input video should be embedded - in the request as ``input_content``. If set, ``input_content`` must be - unset. - input_content (bytes): The video data bytes. If unset, the input video(s) should be - specified via the ``input_uri``. If set, ``input_uri`` must be unset. - video_context (Union[dict, ~google.cloud.videointelligence_v1.types.VideoContext]): Additional video context and/or feature-specific parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.videointelligence_v1.types.VideoContext` - output_uri (str): Optional. Location where the output (in JSON format) should be - stored. Currently, only `Cloud - Storage <https://cloud.google.com/storage/>`__ URIs are supported. These - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. - location_id (str): Optional. Cloud region where annotation should take place. Supported - cloud regions are: ``us-east1``, ``us-west1``, ``europe-west1``, - ``asia-east1``. If no region is specified, the region will be determined - based on video file location. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt.
- metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.videointelligence_v1.types._OperationFuture` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.annotate_video, - default_retry=self._method_configs["AnnotateVideo"].retry, - default_timeout=self._method_configs["AnnotateVideo"].timeout, - client_info=self._client_info, - ) - - request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, - input_uri=input_uri, - input_content=input_content, - video_context=video_context, - output_uri=output_uri, - location_id=location_id, - ) - operation = self._inner_api_calls["annotate_video"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - video_intelligence_pb2.AnnotateVideoResponse, - metadata_type=video_intelligence_pb2.AnnotateVideoProgress, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client_config.py deleted file mode 100644 index bcc8d4f393cf..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/gapic/video_intelligence_service_client_config.py +++ /dev/null @@ -1,37 +0,0 @@ -config = { - "interfaces": { - "google.cloud.videointelligence.v1.VideoIntelligenceService": { 
- "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - }, - "retry_params": { - "retry_policy_1_params": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - "retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "AnnotateVideo": { - "timeout_millis": 600000, - "retry_codes_name": "retry_policy_1_codes", - "retry_params_name": "retry_policy_1_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence.proto b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence.proto deleted file mode 100644 index 5b3a3e343c26..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence.proto +++ /dev/null @@ -1,900 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.cloud.videointelligence.v1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; - -option csharp_namespace = "Google.Cloud.VideoIntelligence.V1"; -option go_package = "google.golang.org/genproto/googleapis/cloud/videointelligence/v1;videointelligence"; -option java_multiple_files = true; -option java_outer_classname = "VideoIntelligenceServiceProto"; -option java_package = "com.google.cloud.videointelligence.v1"; -option php_namespace = "Google\\Cloud\\VideoIntelligence\\V1"; -option ruby_package = "Google::Cloud::VideoIntelligence::V1"; - -// Service that implements the Video Intelligence API. -service VideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform"; - - // Performs asynchronous video annotation. Progress and results can be - // retrieved through the `google.longrunning.Operations` interface. - // `Operation.metadata` contains `AnnotateVideoProgress` (progress). - // `Operation.response` contains `AnnotateVideoResponse` (results). 
- rpc AnnotateVideo(AnnotateVideoRequest) - returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1/videos:annotate" - body: "*" - }; - option (google.api.method_signature) = "input_uri,features"; - option (google.longrunning.operation_info) = { - response_type: "AnnotateVideoResponse" - metadata_type: "AnnotateVideoProgress" - }; - } -} - -// Video annotation request. -message AnnotateVideoRequest { - // Input video location. Currently, only - // [Cloud Storage](https://cloud.google.com/storage/) URIs are - // supported. URIs must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For - // more information, see [Request - // URIs](https://cloud.google.com/storage/docs/request-endpoints). To identify - // multiple videos, a video URI may include wildcards in the `object-id`. - // Supported wildcards: '*' to match 0 or more characters; - // '?' to match 1 character. If unset, the input video should be embedded - // in the request as `input_content`. If set, `input_content` must be unset. - string input_uri = 1; - - // The video data bytes. - // If unset, the input video(s) should be specified via the `input_uri`. - // If set, `input_uri` must be unset. - bytes input_content = 6; - - // Required. Requested video annotation features. - repeated Feature features = 2 [(google.api.field_behavior) = REQUIRED]; - - // Additional video context and/or feature-specific parameters. - VideoContext video_context = 3; - - // Optional. Location where the output (in JSON format) should be stored. - // Currently, only [Cloud Storage](https://cloud.google.com/storage/) - // URIs are supported. These must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). 
For - // more information, see [Request - // URIs](https://cloud.google.com/storage/docs/request-endpoints). - string output_uri = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. Cloud region where annotation should take place. Supported cloud - // regions are: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no - // region is specified, the region will be determined based on video file - // location. - string location_id = 5 [(google.api.field_behavior) = OPTIONAL]; -} - -// Video context and/or feature-specific parameters. -message VideoContext { - // Video segments to annotate. The segments may overlap and are not required - // to be contiguous or span the whole video. If unspecified, each video is - // treated as a single segment. - repeated VideoSegment segments = 1; - - // Config for LABEL_DETECTION. - LabelDetectionConfig label_detection_config = 2; - - // Config for SHOT_CHANGE_DETECTION. - ShotChangeDetectionConfig shot_change_detection_config = 3; - - // Config for EXPLICIT_CONTENT_DETECTION. - ExplicitContentDetectionConfig explicit_content_detection_config = 4; - - // Config for FACE_DETECTION. - FaceDetectionConfig face_detection_config = 5; - - // Config for SPEECH_TRANSCRIPTION. - SpeechTranscriptionConfig speech_transcription_config = 6; - - // Config for TEXT_DETECTION. - TextDetectionConfig text_detection_config = 8; - - // Config for PERSON_DETECTION. - PersonDetectionConfig person_detection_config = 11; - - // Config for OBJECT_TRACKING. - ObjectTrackingConfig object_tracking_config = 13; -} - -// Video annotation feature. -enum Feature { - // Unspecified. - FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - LABEL_DETECTION = 1; - - // Shot change detection. - SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - EXPLICIT_CONTENT_DETECTION = 3; - - // Human face detection. - FACE_DETECTION = 4; - - // Speech transcription. 
- SPEECH_TRANSCRIPTION = 6; - - // OCR text detection and tracking. - TEXT_DETECTION = 7; - - // Object detection and tracking. - OBJECT_TRACKING = 9; - - // Logo detection, tracking, and recognition. - LOGO_RECOGNITION = 12; - - // Person detection. - PERSON_DETECTION = 14; -} - -// Label detection mode. -enum LabelDetectionMode { - // Unspecified. - LABEL_DETECTION_MODE_UNSPECIFIED = 0; - - // Detect shot-level labels. - SHOT_MODE = 1; - - // Detect frame-level labels. - FRAME_MODE = 2; - - // Detect both shot-level and frame-level labels. - SHOT_AND_FRAME_MODE = 3; -} - -// Bucketized representation of likelihood. -enum Likelihood { - // Unspecified likelihood. - LIKELIHOOD_UNSPECIFIED = 0; - - // Very unlikely. - VERY_UNLIKELY = 1; - - // Unlikely. - UNLIKELY = 2; - - // Possible. - POSSIBLE = 3; - - // Likely. - LIKELY = 4; - - // Very likely. - VERY_LIKELY = 5; -} - -// Config for LABEL_DETECTION. -message LabelDetectionConfig { - // What labels should be detected with LABEL_DETECTION, in addition to - // video-level labels or segment-level labels. - // If unspecified, defaults to `SHOT_MODE`. - LabelDetectionMode label_detection_mode = 1; - - // Whether the video has been shot from a stationary (i.e., non-moving) - // camera. When set to true, might improve detection accuracy for moving - // objects. Should be used with `SHOT_AND_FRAME_MODE` enabled. - bool stationary_camera = 2; - - // Model to use for label detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 3; - - // The confidence threshold used to filter labels from - // frame-level detection. If not set, it is set to 0.4 by default. The valid - // range for this threshold is [0.1, 0.9]. Any value set outside of this - // range will be clipped. - // Note: For best results, follow the default threshold. We will update - // the default threshold every time we release a new model.
- float frame_confidence_threshold = 4; - - // The confidence threshold used to filter labels from - // video-level and shot-level detections. If not set, it's set to 0.3 by - // default. The valid range for this threshold is [0.1, 0.9]. Any value set - // outside of this range will be clipped. - // Note: For best results, follow the default threshold. We will update - // the default threshold every time we release a new model. - float video_confidence_threshold = 5; -} - -// Config for SHOT_CHANGE_DETECTION. -message ShotChangeDetectionConfig { - // Model to use for shot change detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for OBJECT_TRACKING. -message ObjectTrackingConfig { - // Model to use for object tracking. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for FACE_DETECTION. -message FaceDetectionConfig { - // Model to use for face detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; - - // Whether bounding boxes are included in the face annotation output. - bool include_bounding_boxes = 2; - - // Whether to enable face attributes detection, such as glasses, dark_glasses, - // mouth_open, etc. Ignored if 'include_bounding_boxes' is set to false. - bool include_attributes = 5; -} - -// Config for PERSON_DETECTION. -message PersonDetectionConfig { - // Whether bounding boxes are included in the person detection annotation - // output. - bool include_bounding_boxes = 1; - - // Whether to enable pose landmarks detection. Ignored if - // 'include_bounding_boxes' is set to false. - bool include_pose_landmarks = 2; - - // Whether to enable person attributes detection, such as cloth color (black, - // blue, etc), type (coat, dress, etc), pattern (plain, floral, etc), hair, - // etc.
- // Ignored if 'include_bounding_boxes' is set to false. - bool include_attributes = 3; -} - -// Config for EXPLICIT_CONTENT_DETECTION. -message ExplicitContentDetectionConfig { - // Model to use for explicit content detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for TEXT_DETECTION. -message TextDetectionConfig { - // Language hint can be specified if the language to be detected is known a - // priori. It can increase the accuracy of the detection. Language hint must - // be language code in BCP-47 format. - // - // Automatic language detection is performed if no hint is provided. - repeated string language_hints = 1; - - // Model to use for text detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 2; -} - -// Video segment. -message VideoSegment { - // Time-offset, relative to the beginning of the video, - // corresponding to the start of the segment (inclusive). - google.protobuf.Duration start_time_offset = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the end of the segment (inclusive). - google.protobuf.Duration end_time_offset = 2; -} - -// Video segment level annotation results for label detection. -message LabelSegment { - // Video segment where a label was detected. - VideoSegment segment = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Video frame level annotation results for label detection. -message LabelFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Detected entity from video analysis. -message Entity { - // Opaque entity ID. 
Some IDs may be available in - // [Google Knowledge Graph Search - // API](https://developers.google.com/knowledge-graph/). - string entity_id = 1; - - // Textual description, e.g., `Fixed-gear bicycle`. - string description = 2; - - // Language code for `description` in BCP-47 format. - string language_code = 3; -} - -// Label annotation. -message LabelAnnotation { - // Detected entity. - Entity entity = 1; - - // Common categories for the detected entity. - // For example, when the label is `Terrier`, the category is likely `dog`. And - // in some cases there might be more than one category, e.g., `Terrier` could - // also be a `pet`. - repeated Entity category_entities = 2; - - // All video segments where a label was detected. - repeated LabelSegment segments = 3; - - // All video frames where a label was detected. - repeated LabelFrame frames = 4; - - // Feature version. - string version = 5; -} - -// Video frame level annotation results for explicit content. -message ExplicitContentFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Likelihood of the pornography content. - Likelihood pornography_likelihood = 2; -} - -// Explicit content annotation (based on per-frame visual signals only). -// If no explicit content has been detected in a frame, no annotations are -// present for that frame. -message ExplicitContentAnnotation { - // All video frames where explicit content was detected. - repeated ExplicitContentFrame frames = 1; - - // Feature version. - string version = 2; -} - -// Normalized bounding box. -// The normalized vertex coordinates are relative to the original image. -// Range: [0, 1]. -message NormalizedBoundingBox { - // Left X coordinate. - float left = 1; - - // Top Y coordinate. - float top = 2; - - // Right X coordinate. - float right = 3; - - // Bottom Y coordinate.
- float bottom = 4; -} - -// Face detection annotation. -message FaceDetectionAnnotation { - // Feature version. - string version = 5; -} - -// Person detection annotation per video. -message PersonDetectionAnnotation { - // The detected tracks of a person. - repeated Track tracks = 1; - - // Feature version. - string version = 2; -} - -// Video segment level annotation results for face detection. -message FaceSegment { - // Video segment where a face was detected. - VideoSegment segment = 1; -} - -// Deprecated. No effect. -message FaceFrame { - option deprecated = true; - - // Normalized bounding boxes in a frame. - // There can be more than one box if the same face is detected in multiple - // locations within the current frame. - repeated NormalizedBoundingBox normalized_bounding_boxes = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the video frame for this location. - google.protobuf.Duration time_offset = 2; -} - -// Deprecated. No effect. -message FaceAnnotation { - option deprecated = true; - - // Thumbnail of a representative face view (in JPEG format). - bytes thumbnail = 1; - - // All video segments where a face was detected. - repeated FaceSegment segments = 2; - - // All video frames where a face was detected. - repeated FaceFrame frames = 3; -} - -// For tracking related features. -// An object at time_offset with attributes, and located with -// normalized_bounding_box. -message TimestampedObject { - // Normalized bounding box in a frame, where the object is located. - NormalizedBoundingBox normalized_bounding_box = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the video frame for this object. - google.protobuf.Duration time_offset = 2; - - // Optional. The attributes of the object in the bounding box. - repeated DetectedAttribute attributes = 3 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. The detected landmarks.
- repeated DetectedLandmark landmarks = 4 - [(google.api.field_behavior) = OPTIONAL]; -} - -// A track of an object instance. -message Track { - // Video segment of a track. - VideoSegment segment = 1; - - // The object with timestamp and attributes per frame in the track. - repeated TimestampedObject timestamped_objects = 2; - - // Optional. Attributes at the track level. - repeated DetectedAttribute attributes = 3 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. The confidence score of the tracked object. - float confidence = 4 [(google.api.field_behavior) = OPTIONAL]; -} - -// A generic detected attribute represented by name in string format. -message DetectedAttribute { - // The name of the attribute, for example, glasses, dark_glasses, mouth_open. - // A full list of supported type names will be provided in the document. - string name = 1; - - // Detected attribute confidence. Range [0, 1]. - float confidence = 2; - - // Text value of the detection result. For example, the value for "HairColor" - // can be "black", "blonde", etc. - string value = 3; -} - -// A generic detected landmark represented by name in string format and a 2D -// location. -message DetectedLandmark { - // The name of this landmark, for example, left_hand, right_shoulder. - string name = 1; - - // The 2D point of the detected landmark using the normalized image - // coordinate system. The normalized coordinates have the range from 0 to 1. - NormalizedVertex point = 2; - - // The confidence score of the detected landmark. Range [0, 1]. - float confidence = 3; -} - -// Annotation results for a single video. -message VideoAnnotationResults { - // Video file location in - // [Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Video segment on which the annotation is run. - VideoSegment segment = 10; - - // Topical label annotations on video level or user-specified segment level. - // There is exactly one element for each unique label.
- repeated LabelAnnotation segment_label_annotations = 2; - - // Presence label annotations on video level or user-specified segment level. - // There is exactly one element for each unique label. Compared to the - // existing topical `segment_label_annotations`, this field presents more - // fine-grained, segment-level labels detected in video content and is made - // available only when the client sets `LabelDetectionConfig.model` to - // "builtin/latest" in the request. - repeated LabelAnnotation segment_presence_label_annotations = 23; - - // Topical label annotations on shot level. - // There is exactly one element for each unique label. - repeated LabelAnnotation shot_label_annotations = 3; - - // Presence label annotations on shot level. There is exactly one element for - // each unique label. Compared to the existing topical - // `shot_label_annotations`, this field presents more fine-grained, shot-level - // labels detected in video content and is made available only when the client - // sets `LabelDetectionConfig.model` to "builtin/latest" in the request. - repeated LabelAnnotation shot_presence_label_annotations = 24; - - // Label annotations on frame level. - // There is exactly one element for each unique label. - repeated LabelAnnotation frame_label_annotations = 4; - - // Deprecated. Please use `face_detection_annotations` instead. - repeated FaceAnnotation face_annotations = 5 [deprecated = true]; - - // Face detection annotations. - repeated FaceDetectionAnnotation face_detection_annotations = 13; - - // Shot annotations. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 6; - - // Explicit content annotation. - ExplicitContentAnnotation explicit_annotation = 7; - - // Speech transcription. - repeated SpeechTranscription speech_transcriptions = 11; - - // OCR text detection and tracking. - // Annotations for list of detected text snippets. Each will have list of - // frame information associated with it. 
- repeated TextAnnotation text_annotations = 12; - - // Annotations for list of objects detected and tracked in video. - repeated ObjectTrackingAnnotation object_annotations = 14; - - // Annotations for list of logos detected, tracked and recognized in video. - repeated LogoRecognitionAnnotation logo_recognition_annotations = 19; - - // Person detection annotations. - repeated PersonDetectionAnnotation person_detection_annotations = 20; - - // If set, indicates an error. Note that for a single `AnnotateVideoRequest` - // some videos may succeed and some may fail. - google.rpc.Status error = 9; -} - -// Video annotation response. Included in the `response` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoResponse { - // Annotation results for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationResults annotation_results = 1; -} - -// Annotation progress for a single video. -message VideoAnnotationProgress { - // Video file location in - // [Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Approximate percentage processed thus far. Guaranteed to be - // 100 when fully processed. - int32 progress_percent = 2; - - // Time when the request was received. - google.protobuf.Timestamp start_time = 3; - - // Time of the most recent update. - google.protobuf.Timestamp update_time = 4; - - // Specifies which feature is being tracked if the request contains more than - // one feature. - Feature feature = 5; - - // Specifies which segment is being tracked if the request contains more than - // one segment. - VideoSegment segment = 6; -} - -// Video annotation progress. Included in the `metadata` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoProgress { - // Progress metadata for all videos specified in `AnnotateVideoRequest`. 
- repeated VideoAnnotationProgress annotation_progress = 1; -} - -// Config for SPEECH_TRANSCRIPTION. -message SpeechTranscriptionConfig { - // Required. The language of the supplied audio as a - // [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. - // Example: "en-US". - // See [Language Support](https://cloud.google.com/speech/docs/languages) - // for a list of the currently supported language codes. - string language_code = 1 [(google.api.field_behavior) = REQUIRED]; - - // Optional. Maximum number of recognition hypotheses to be returned. - // Specifically, the maximum number of `SpeechRecognitionAlternative` messages - // within each `SpeechTranscription`. The server may return fewer than - // `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will - // return a maximum of one. If omitted, will return a maximum of one. - int32 max_alternatives = 2 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If set to `true`, the server will attempt to filter out - // profanities, replacing all but the initial character in each filtered word - // with asterisks, e.g. "f***". If set to `false` or omitted, profanities - // won't be filtered out. - bool filter_profanity = 3 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. A means to provide context to assist the speech recognition. - repeated SpeechContext speech_contexts = 4 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If 'true', adds punctuation to recognition result hypotheses. - // This feature is only available in select languages. Setting this for - // requests in other languages has no effect at all. The default 'false' value - // does not add punctuation to result hypotheses. NOTE: "This is currently - // offered as an experimental service, complimentary to all users. In the - // future this may be exclusively available as a premium feature."
- bool enable_automatic_punctuation = 5 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. For file formats, such as MXF or MKV, supporting multiple audio - // tracks, specify up to two tracks. Default: track 0. - repeated int32 audio_tracks = 6 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If 'true', enables speaker detection for each recognized word in - // the top alternative of the recognition result using a speaker_tag provided - // in the WordInfo. - // Note: When this is true, we send all the words from the beginning of the - // audio for the top alternative in every consecutive response. - // This is done in order to improve our speaker tags as our models learn to - // identify the speakers in the conversation over time. - bool enable_speaker_diarization = 7 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If set, specifies the estimated number of speakers in the - // conversation. If not set, defaults to '2'. Ignored unless - // enable_speaker_diarization is set to true. - int32 diarization_speaker_count = 8 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If `true`, the top result includes a list of words and the - // confidence for those words. If `false`, no word-level confidence - // information is returned. The default is `false`. - bool enable_word_confidence = 9 [(google.api.field_behavior) = OPTIONAL]; -} - -// Provides "hints" to the speech recognizer to favor specific words and phrases -// in the results. -message SpeechContext { - // Optional. A list of strings containing words and phrases "hints" so that - // the speech recognition is more likely to recognize them. This can be used - // to improve the accuracy for specific words and phrases, for example, if - // specific commands are typically spoken by the user. This can also be used - // to add additional words to the vocabulary of the recognizer. See - // [usage limits](https://cloud.google.com/speech/limits#content). 
- repeated string phrases = 1 [(google.api.field_behavior) = OPTIONAL]; -} - -// A speech recognition result corresponding to a portion of the audio. -message SpeechTranscription { - // May contain one or more recognition hypotheses (up to the maximum specified - // in `max_alternatives`). These alternatives are ordered in terms of - // accuracy, with the top (first) alternative being the most probable, as - // ranked by the recognizer. - repeated SpeechRecognitionAlternative alternatives = 1; - - // Output only. The [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) - // language tag of the language in this result. This language code was - // detected to have the most likelihood of being spoken in the audio. - string language_code = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// Alternative hypotheses (a.k.a. n-best list). -message SpeechRecognitionAlternative { - // Transcript text representing the words that the user spoke. - string transcript = 1; - - // Output only. The confidence estimate between 0.0 and 1.0. A higher number - // indicates an estimated greater likelihood that the recognized words are - // correct. This field is set only for the top alternative. - // This field is not guaranteed to be accurate and users should not rely on it - // to be always provided. - // The default of 0.0 is a sentinel value indicating `confidence` was not set. - float confidence = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. A list of word-specific information for each recognized word. - // Note: When `enable_speaker_diarization` is set to true, you will see all - // the words from the beginning of the audio. - repeated WordInfo words = 3 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// Word-specific information for recognized words. Word information is only -// included in the response when certain request parameters are set, such -// as `enable_word_time_offsets`. 
-message WordInfo { - // Time offset relative to the beginning of the audio, and - // corresponding to the start of the spoken word. This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. - google.protobuf.Duration start_time = 1; - - // Time offset relative to the beginning of the audio, and - // corresponding to the end of the spoken word. This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. - google.protobuf.Duration end_time = 2; - - // The word corresponding to this set of information. - string word = 3; - - // Output only. The confidence estimate between 0.0 and 1.0. A higher number - // indicates an estimated greater likelihood that the recognized words are - // correct. This field is set only for the top alternative. - // This field is not guaranteed to be accurate and users should not rely on it - // to be always provided. - // The default of 0.0 is a sentinel value indicating `confidence` was not set. - float confidence = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. A distinct integer value is assigned for every speaker within - // the audio. This field specifies which one of those speakers was detected to - // have spoken this word. Value ranges from 1 up to diarization_speaker_count, - // and is only set if speaker diarization is enabled. - int32 speaker_tag = 5 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// A vertex represents a 2D point in the image. -// NOTE: the normalized vertex coordinates are relative to the original image -// and range from 0 to 1. -message NormalizedVertex { - // X coordinate. - float x = 1; - - // Y coordinate. - float y = 2; -} - -// Normalized bounding polygon for text (that might not be aligned with axis). 
-// Contains list of the corner points in clockwise order starting from -// top-left corner. For example, for a rectangular bounding box: -// When the text is horizontal it might look like: -// 0----1 -// | | -// 3----2 -// -// When it's clockwise rotated 180 degrees around the top-left corner it -// becomes: -// 2----3 -// | | -// 1----0 -// -// and the vertex order will still be (0, 1, 2, 3). Note that values can be less -// than 0, or greater than 1 due to trigonometric calculations for location of -// the box. -message NormalizedBoundingPoly { - // Normalized vertices of the bounding polygon. - repeated NormalizedVertex vertices = 1; -} - -// Video segment level annotation results for text detection. -message TextSegment { - // Video segment where a text snippet was detected. - VideoSegment segment = 1; - - // Confidence for the track of detected text. It is calculated as the highest - // over all frames where OCR detected text appears. - float confidence = 2; - - // Information related to the frames where OCR detected text appears. - repeated TextFrame frames = 3; -} - -// Video frame level annotation results for text annotation (OCR). -// Contains information regarding timestamp and bounding box locations for the -// frames containing detected OCR text snippets. -message TextFrame { - // Bounding polygon of the detected text for this frame. - NormalizedBoundingPoly rotated_bounding_box = 1; - - // Timestamp of this frame. - google.protobuf.Duration time_offset = 2; -} - -// Annotations related to one detected OCR text snippet. This will contain the -// corresponding text, confidence value, and frame level information for each -// detection. -message TextAnnotation { - // The detected text. - string text = 1; - - // All video segments where OCR detected text appears. - repeated TextSegment segments = 2; - - // Feature version. - string version = 3; -} - -// Video frame level annotations for object detection and tracking.
This field -// stores per frame location, time offset, and confidence. -message ObjectTrackingFrame { - // The normalized bounding box location of this object track for the frame. - NormalizedBoundingBox normalized_bounding_box = 1; - - // The timestamp of the frame in microseconds. - google.protobuf.Duration time_offset = 2; -} - -// Annotations corresponding to one tracked object. -message ObjectTrackingAnnotation { - // Different representation of tracking info in non-streaming batch - // and streaming modes. - oneof track_info { - // Non-streaming batch mode ONLY. - // Each object track corresponds to one video segment where it appears. - VideoSegment segment = 3; - - // Streaming mode ONLY. - // In streaming mode, we do not know the end time of a tracked object - // before it is completed. Hence, there is no VideoSegment info returned. - // Instead, we provide a unique identifiable integer track_id so that - // the customers can correlate the results of the ongoing - // ObjectTrackAnnotation of the same track_id over time. - int64 track_id = 5; - } - - // Entity to specify the object category that this track is labeled as. - Entity entity = 1; - - // Object category's labeling confidence of this track. - float confidence = 4; - - // Information corresponding to all frames where this object track appears. - // Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame - // messages in frames. - // Streaming mode: it can only be one ObjectTrackingFrame message in frames. - repeated ObjectTrackingFrame frames = 2; - - // Feature version. - string version = 6; -} - -// Annotation corresponding to one detected, tracked and recognized logo class. -message LogoRecognitionAnnotation { - // Entity category information to specify the logo class that all the logo - // tracks within this LogoRecognitionAnnotation are recognized as. - Entity entity = 1; - - // All logo tracks where the recognized logo appears. 
Each track corresponds - // to one logo instance appearing in consecutive frames. - repeated Track tracks = 2; - - // All video segments where the recognized logo appears. There might be - // multiple instances of the same logo class appearing in one VideoSegment. - repeated VideoSegment segments = 3; -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2.py deleted file mode 100644 index 7912c6eec12c..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2.py +++ /dev/null @@ -1,5444 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/videointelligence_v1/proto/video_intelligence.proto -"""Generated protocol buffer code.""" -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/videointelligence_v1/proto/video_intelligence.proto", - 
package="google.cloud.videointelligence.v1", - syntax="proto3", - serialized_options=b"\n%com.google.cloud.videointelligence.v1B\035VideoIntelligenceServiceProtoP\001ZRgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1;videointelligence\252\002!Google.Cloud.VideoIntelligence.V1\312\002!Google\\Cloud\\VideoIntelligence\\V1\352\002$Google::Cloud::VideoIntelligence::V1", - create_key=_descriptor._internal_create_key, - serialized_pb=b'\n@google/cloud/videointelligence_v1/proto/video_intelligence.proto\x12!google.cloud.videointelligence.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a#google/longrunning/operations.proto\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17google/rpc/status.proto"\xfe\x01\n\x14\x41nnotateVideoRequest\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x15\n\rinput_content\x18\x06 \x01(\x0c\x12\x41\n\x08\x66\x65\x61tures\x18\x02 \x03(\x0e\x32*.google.cloud.videointelligence.v1.FeatureB\x03\xe0\x41\x02\x12\x46\n\rvideo_context\x18\x03 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoContext\x12\x17\n\noutput_uri\x18\x04 \x01(\tB\x03\xe0\x41\x01\x12\x18\n\x0blocation_id\x18\x05 \x01(\tB\x03\xe0\x41\x01"\xc1\x06\n\x0cVideoContext\x12\x41\n\x08segments\x18\x01 \x03(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12W\n\x16label_detection_config\x18\x02 \x01(\x0b\x32\x37.google.cloud.videointelligence.v1.LabelDetectionConfig\x12\x62\n\x1cshot_change_detection_config\x18\x03 \x01(\x0b\x32<.google.cloud.videointelligence.v1.ShotChangeDetectionConfig\x12l\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32\x41.google.cloud.videointelligence.v1.ExplicitContentDetectionConfig\x12U\n\x15\x66\x61\x63\x65_detection_config\x18\x05 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1.FaceDetectionConfig\x12\x61\n\x1bspeech_transcription_config\x18\x06 
\x01(\x0b\x32<.google.cloud.videointelligence.v1.SpeechTranscriptionConfig\x12U\n\x15text_detection_config\x18\x08 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1.TextDetectionConfig\x12Y\n\x17person_detection_config\x18\x0b \x01(\x0b\x32\x38.google.cloud.videointelligence.v1.PersonDetectionConfig\x12W\n\x16object_tracking_config\x18\r \x01(\x0b\x32\x37.google.cloud.videointelligence.v1.ObjectTrackingConfig"\xdd\x01\n\x14LabelDetectionConfig\x12S\n\x14label_detection_mode\x18\x01 \x01(\x0e\x32\x35.google.cloud.videointelligence.v1.LabelDetectionMode\x12\x19\n\x11stationary_camera\x18\x02 \x01(\x08\x12\r\n\x05model\x18\x03 \x01(\t\x12"\n\x1a\x66rame_confidence_threshold\x18\x04 \x01(\x02\x12"\n\x1avideo_confidence_threshold\x18\x05 \x01(\x02"*\n\x19ShotChangeDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"%\n\x14ObjectTrackingConfig\x12\r\n\x05model\x18\x01 \x01(\t"`\n\x13\x46\x61\x63\x65\x44\x65tectionConfig\x12\r\n\x05model\x18\x01 \x01(\t\x12\x1e\n\x16include_bounding_boxes\x18\x02 \x01(\x08\x12\x1a\n\x12include_attributes\x18\x05 \x01(\x08"s\n\x15PersonDetectionConfig\x12\x1e\n\x16include_bounding_boxes\x18\x01 \x01(\x08\x12\x1e\n\x16include_pose_landmarks\x18\x02 \x01(\x08\x12\x1a\n\x12include_attributes\x18\x03 \x01(\x08"/\n\x1e\x45xplicitContentDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"<\n\x13TextDetectionConfig\x12\x16\n\x0elanguage_hints\x18\x01 \x03(\t\x12\r\n\x05model\x18\x02 \x01(\t"x\n\x0cVideoSegment\x12\x34\n\x11start_time_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x32\n\x0f\x65nd_time_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"d\n\x0cLabelSegment\x12@\n\x07segment\x18\x01 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02"P\n\nLabelFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x12\n\nconfidence\x18\x02 \x01(\x02"G\n\x06\x45ntity\x12\x11\n\tentity_id\x18\x01 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 
\x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t"\xa5\x02\n\x0fLabelAnnotation\x12\x39\n\x06\x65ntity\x18\x01 \x01(\x0b\x32).google.cloud.videointelligence.v1.Entity\x12\x44\n\x11\x63\x61tegory_entities\x18\x02 \x03(\x0b\x32).google.cloud.videointelligence.v1.Entity\x12\x41\n\x08segments\x18\x03 \x03(\x0b\x32/.google.cloud.videointelligence.v1.LabelSegment\x12=\n\x06\x66rames\x18\x04 \x03(\x0b\x32-.google.cloud.videointelligence.v1.LabelFrame\x12\x0f\n\x07version\x18\x05 \x01(\t"\x95\x01\n\x14\x45xplicitContentFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12M\n\x16pornography_likelihood\x18\x02 \x01(\x0e\x32-.google.cloud.videointelligence.v1.Likelihood"u\n\x19\x45xplicitContentAnnotation\x12G\n\x06\x66rames\x18\x01 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1.ExplicitContentFrame\x12\x0f\n\x07version\x18\x02 \x01(\t"Q\n\x15NormalizedBoundingBox\x12\x0c\n\x04left\x18\x01 \x01(\x02\x12\x0b\n\x03top\x18\x02 \x01(\x02\x12\r\n\x05right\x18\x03 \x01(\x02\x12\x0e\n\x06\x62ottom\x18\x04 \x01(\x02"*\n\x17\x46\x61\x63\x65\x44\x65tectionAnnotation\x12\x0f\n\x07version\x18\x05 \x01(\t"f\n\x19PersonDetectionAnnotation\x12\x38\n\x06tracks\x18\x01 \x03(\x0b\x32(.google.cloud.videointelligence.v1.Track\x12\x0f\n\x07version\x18\x02 \x01(\t"O\n\x0b\x46\x61\x63\x65Segment\x12@\n\x07segment\x18\x01 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment"\x9c\x01\n\tFaceFrame\x12[\n\x19normalized_bounding_boxes\x18\x01 \x03(\x0b\x32\x38.google.cloud.videointelligence.v1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration:\x02\x18\x01"\xa7\x01\n\x0e\x46\x61\x63\x65\x41nnotation\x12\x11\n\tthumbnail\x18\x01 \x01(\x0c\x12@\n\x08segments\x18\x02 \x03(\x0b\x32..google.cloud.videointelligence.v1.FaceSegment\x12<\n\x06\x66rames\x18\x03 \x03(\x0b\x32,.google.cloud.videointelligence.v1.FaceFrame:\x02\x18\x01"\xba\x02\n\x11TimestampedObject\x12Y\n\x17normalized_bounding_box\x18\x01 
\x01(\x0b\x32\x38.google.cloud.videointelligence.v1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12M\n\nattributes\x18\x03 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1.DetectedAttributeB\x03\xe0\x41\x01\x12K\n\tlandmarks\x18\x04 \x03(\x0b\x32\x33.google.cloud.videointelligence.v1.DetectedLandmarkB\x03\xe0\x41\x01"\x84\x02\n\x05Track\x12@\n\x07segment\x18\x01 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12Q\n\x13timestamped_objects\x18\x02 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1.TimestampedObject\x12M\n\nattributes\x18\x03 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1.DetectedAttributeB\x03\xe0\x41\x01\x12\x17\n\nconfidence\x18\x04 \x01(\x02\x42\x03\xe0\x41\x01"D\n\x11\x44\x65tectedAttribute\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x12\n\nconfidence\x18\x02 \x01(\x02\x12\r\n\x05value\x18\x03 \x01(\t"x\n\x10\x44\x65tectedLandmark\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x42\n\x05point\x18\x02 \x01(\x0b\x32\x33.google.cloud.videointelligence.v1.NormalizedVertex\x12\x12\n\nconfidence\x18\x03 \x01(\x02"\xe9\n\n\x16VideoAnnotationResults\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12@\n\x07segment\x18\n \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12U\n\x19segment_label_annotations\x18\x02 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1.LabelAnnotation\x12^\n"segment_presence_label_annotations\x18\x17 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1.LabelAnnotation\x12R\n\x16shot_label_annotations\x18\x03 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1.LabelAnnotation\x12[\n\x1fshot_presence_label_annotations\x18\x18 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1.LabelAnnotation\x12S\n\x17\x66rame_label_annotations\x18\x04 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1.LabelAnnotation\x12O\n\x10\x66\x61\x63\x65_annotations\x18\x05 
\x03(\x0b\x32\x31.google.cloud.videointelligence.v1.FaceAnnotationB\x02\x18\x01\x12^\n\x1a\x66\x61\x63\x65_detection_annotations\x18\r \x03(\x0b\x32:.google.cloud.videointelligence.v1.FaceDetectionAnnotation\x12I\n\x10shot_annotations\x18\x06 \x03(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12Y\n\x13\x65xplicit_annotation\x18\x07 \x01(\x0b\x32<.google.cloud.videointelligence.v1.ExplicitContentAnnotation\x12U\n\x15speech_transcriptions\x18\x0b \x03(\x0b\x32\x36.google.cloud.videointelligence.v1.SpeechTranscription\x12K\n\x10text_annotations\x18\x0c \x03(\x0b\x32\x31.google.cloud.videointelligence.v1.TextAnnotation\x12W\n\x12object_annotations\x18\x0e \x03(\x0b\x32;.google.cloud.videointelligence.v1.ObjectTrackingAnnotation\x12\x62\n\x1clogo_recognition_annotations\x18\x13 \x03(\x0b\x32<.google.cloud.videointelligence.v1.LogoRecognitionAnnotation\x12\x62\n\x1cperson_detection_annotations\x18\x14 \x03(\x0b\x32<.google.cloud.videointelligence.v1.PersonDetectionAnnotation\x12!\n\x05\x65rror\x18\t \x01(\x0b\x32\x12.google.rpc.Status"n\n\x15\x41nnotateVideoResponse\x12U\n\x12\x61nnotation_results\x18\x01 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1.VideoAnnotationResults"\xa6\x02\n\x17VideoAnnotationProgress\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x18\n\x10progress_percent\x18\x02 \x01(\x05\x12.\n\nstart_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0bupdate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12;\n\x07\x66\x65\x61ture\x18\x05 \x01(\x0e\x32*.google.cloud.videointelligence.v1.Feature\x12@\n\x07segment\x18\x06 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment"p\n\x15\x41nnotateVideoProgress\x12W\n\x13\x61nnotation_progress\x18\x01 \x03(\x0b\x32:.google.cloud.videointelligence.v1.VideoAnnotationProgress"\x81\x03\n\x19SpeechTranscriptionConfig\x12\x1a\n\rlanguage_code\x18\x01 \x01(\tB\x03\xe0\x41\x02\x12\x1d\n\x10max_alternatives\x18\x02 
\x01(\x05\x42\x03\xe0\x41\x01\x12\x1d\n\x10\x66ilter_profanity\x18\x03 \x01(\x08\x42\x03\xe0\x41\x01\x12N\n\x0fspeech_contexts\x18\x04 \x03(\x0b\x32\x30.google.cloud.videointelligence.v1.SpeechContextB\x03\xe0\x41\x01\x12)\n\x1c\x65nable_automatic_punctuation\x18\x05 \x01(\x08\x42\x03\xe0\x41\x01\x12\x19\n\x0c\x61udio_tracks\x18\x06 \x03(\x05\x42\x03\xe0\x41\x01\x12\'\n\x1a\x65nable_speaker_diarization\x18\x07 \x01(\x08\x42\x03\xe0\x41\x01\x12&\n\x19\x64iarization_speaker_count\x18\x08 \x01(\x05\x42\x03\xe0\x41\x01\x12#\n\x16\x65nable_word_confidence\x18\t \x01(\x08\x42\x03\xe0\x41\x01"%\n\rSpeechContext\x12\x14\n\x07phrases\x18\x01 \x03(\tB\x03\xe0\x41\x01"\x88\x01\n\x13SpeechTranscription\x12U\n\x0c\x61lternatives\x18\x01 \x03(\x0b\x32?.google.cloud.videointelligence.v1.SpeechRecognitionAlternative\x12\x1a\n\rlanguage_code\x18\x02 \x01(\tB\x03\xe0\x41\x03"\x8c\x01\n\x1cSpeechRecognitionAlternative\x12\x12\n\ntranscript\x18\x01 \x01(\t\x12\x17\n\nconfidence\x18\x02 \x01(\x02\x42\x03\xe0\x41\x03\x12?\n\x05words\x18\x03 \x03(\x0b\x32+.google.cloud.videointelligence.v1.WordInfoB\x03\xe0\x41\x03"\xa7\x01\n\x08WordInfo\x12-\n\nstart_time\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12+\n\x08\x65nd_time\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x0c\n\x04word\x18\x03 \x01(\t\x12\x17\n\nconfidence\x18\x04 \x01(\x02\x42\x03\xe0\x41\x03\x12\x18\n\x0bspeaker_tag\x18\x05 \x01(\x05\x42\x03\xe0\x41\x03"(\n\x10NormalizedVertex\x12\t\n\x01x\x18\x01 \x01(\x02\x12\t\n\x01y\x18\x02 \x01(\x02"_\n\x16NormalizedBoundingPoly\x12\x45\n\x08vertices\x18\x01 \x03(\x0b\x32\x33.google.cloud.videointelligence.v1.NormalizedVertex"\xa1\x01\n\x0bTextSegment\x12@\n\x07segment\x18\x01 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02\x12<\n\x06\x66rames\x18\x03 \x03(\x0b\x32,.google.cloud.videointelligence.v1.TextFrame"\x94\x01\n\tTextFrame\x12W\n\x14rotated_bounding_box\x18\x01 
\x01(\x0b\x32\x39.google.cloud.videointelligence.v1.NormalizedBoundingPoly\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"q\n\x0eTextAnnotation\x12\x0c\n\x04text\x18\x01 \x01(\t\x12@\n\x08segments\x18\x02 \x03(\x0b\x32..google.cloud.videointelligence.v1.TextSegment\x12\x0f\n\x07version\x18\x03 \x01(\t"\xa0\x01\n\x13ObjectTrackingFrame\x12Y\n\x17normalized_bounding_box\x18\x01 \x01(\x0b\x32\x38.google.cloud.videointelligence.v1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"\xa8\x02\n\x18ObjectTrackingAnnotation\x12\x42\n\x07segment\x18\x03 \x01(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegmentH\x00\x12\x12\n\x08track_id\x18\x05 \x01(\x03H\x00\x12\x39\n\x06\x65ntity\x18\x01 \x01(\x0b\x32).google.cloud.videointelligence.v1.Entity\x12\x12\n\nconfidence\x18\x04 \x01(\x02\x12\x46\n\x06\x66rames\x18\x02 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1.ObjectTrackingFrame\x12\x0f\n\x07version\x18\x06 \x01(\tB\x0c\n\ntrack_info"\xd3\x01\n\x19LogoRecognitionAnnotation\x12\x39\n\x06\x65ntity\x18\x01 \x01(\x0b\x32).google.cloud.videointelligence.v1.Entity\x12\x38\n\x06tracks\x18\x02 \x03(\x0b\x32(.google.cloud.videointelligence.v1.Track\x12\x41\n\x08segments\x18\x03 \x03(\x0b\x32/.google.cloud.videointelligence.v1.VideoSegment*\xf5\x01\n\x07\x46\x65\x61ture\x12\x17\n\x13\x46\x45\x41TURE_UNSPECIFIED\x10\x00\x12\x13\n\x0fLABEL_DETECTION\x10\x01\x12\x19\n\x15SHOT_CHANGE_DETECTION\x10\x02\x12\x1e\n\x1a\x45XPLICIT_CONTENT_DETECTION\x10\x03\x12\x12\n\x0e\x46\x41\x43\x45_DETECTION\x10\x04\x12\x18\n\x14SPEECH_TRANSCRIPTION\x10\x06\x12\x12\n\x0eTEXT_DETECTION\x10\x07\x12\x13\n\x0fOBJECT_TRACKING\x10\t\x12\x14\n\x10LOGO_RECOGNITION\x10\x0c\x12\x14\n\x10PERSON_DETECTION\x10\x0e*r\n\x12LabelDetectionMode\x12$\n 
LABEL_DETECTION_MODE_UNSPECIFIED\x10\x00\x12\r\n\tSHOT_MODE\x10\x01\x12\x0e\n\nFRAME_MODE\x10\x02\x12\x17\n\x13SHOT_AND_FRAME_MODE\x10\x03*t\n\nLikelihood\x12\x1a\n\x16LIKELIHOOD_UNSPECIFIED\x10\x00\x12\x11\n\rVERY_UNLIKELY\x10\x01\x12\x0c\n\x08UNLIKELY\x10\x02\x12\x0c\n\x08POSSIBLE\x10\x03\x12\n\n\x06LIKELY\x10\x04\x12\x0f\n\x0bVERY_LIKELY\x10\x05\x32\xc0\x02\n\x18VideoIntelligenceService\x12\xcd\x01\n\rAnnotateVideo\x12\x37.google.cloud.videointelligence.v1.AnnotateVideoRequest\x1a\x1d.google.longrunning.Operation"d\x82\xd3\xe4\x93\x02\x18"\x13/v1/videos:annotate:\x01*\xda\x41\x12input_uri,features\xca\x41.\n\x15\x41nnotateVideoResponse\x12\x15\x41nnotateVideoProgress\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platformB\x8b\x02\n%com.google.cloud.videointelligence.v1B\x1dVideoIntelligenceServiceProtoP\x01ZRgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1;videointelligence\xaa\x02!Google.Cloud.VideoIntelligence.V1\xca\x02!Google\\Cloud\\VideoIntelligence\\V1\xea\x02$Google::Cloud::VideoIntelligence::V1b\x06proto3', - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_rpc_dot_status__pb2.DESCRIPTOR, - ], -) - -_FEATURE = _descriptor.EnumDescriptor( - name="Feature", - full_name="google.cloud.videointelligence.v1.Feature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FACE_DETECTION", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SPEECH_TRANSCRIPTION", - index=5, - number=6, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="TEXT_DETECTION", - index=6, - number=7, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="OBJECT_TRACKING", - index=7, - number=9, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LOGO_RECOGNITION", - index=8, - number=12, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="PERSON_DETECTION", - index=9, - number=14, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=8439, - serialized_end=8684, -) -_sym_db.RegisterEnumDescriptor(_FEATURE) - -Feature = enum_type_wrapper.EnumTypeWrapper(_FEATURE) -_LABELDETECTIONMODE = _descriptor.EnumDescriptor( - name="LabelDetectionMode", - full_name="google.cloud.videointelligence.v1.LabelDetectionMode", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - 
name="LABEL_DETECTION_MODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_MODE", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FRAME_MODE", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_AND_FRAME_MODE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=8686, - serialized_end=8800, -) -_sym_db.RegisterEnumDescriptor(_LABELDETECTIONMODE) - -LabelDetectionMode = enum_type_wrapper.EnumTypeWrapper(_LABELDETECTIONMODE) -_LIKELIHOOD = _descriptor.EnumDescriptor( - name="Likelihood", - full_name="google.cloud.videointelligence.v1.Likelihood", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LIKELIHOOD_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_UNLIKELY", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="UNLIKELY", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="POSSIBLE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LIKELY", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.EnumValueDescriptor( - name="VERY_LIKELY", - index=5, - number=5, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=8802, - serialized_end=8918, -) -_sym_db.RegisterEnumDescriptor(_LIKELIHOOD) - -Likelihood = enum_type_wrapper.EnumTypeWrapper(_LIKELIHOOD) -FEATURE_UNSPECIFIED = 0 -LABEL_DETECTION = 1 -SHOT_CHANGE_DETECTION = 2 -EXPLICIT_CONTENT_DETECTION = 3 -FACE_DETECTION = 4 -SPEECH_TRANSCRIPTION = 6 -TEXT_DETECTION = 7 -OBJECT_TRACKING = 9 -LOGO_RECOGNITION = 12 -PERSON_DETECTION = 14 -LABEL_DETECTION_MODE_UNSPECIFIED = 0 -SHOT_MODE = 1 -FRAME_MODE = 2 -SHOT_AND_FRAME_MODE = 3 -LIKELIHOOD_UNSPECIFIED = 0 -VERY_UNLIKELY = 1 -UNLIKELY = 2 -POSSIBLE = 3 -LIKELY = 4 -VERY_LIKELY = 5 - - -_ANNOTATEVIDEOREQUEST = _descriptor.Descriptor( - name="AnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.input_content", - index=1, - number=6, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="features", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.features", - index=2, - number=2, - type=14, - cpp_type=8, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_context", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.video_context", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="output_uri", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.output_uri", - index=4, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="location_id", - full_name="google.cloud.videointelligence.v1.AnnotateVideoRequest.location_id", - index=5, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - 
is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=319, - serialized_end=573, -) - - -_VIDEOCONTEXT = _descriptor.Descriptor( - name="VideoContext", - full_name="google.cloud.videointelligence.v1.VideoContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1.VideoContext.segments", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.label_detection_config", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.shot_change_detection_config", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.explicit_content_detection_config", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - 
has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.face_detection_config", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcription_config", - full_name="google.cloud.videointelligence.v1.VideoContext.speech_transcription_config", - index=5, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="text_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.text_detection_config", - index=6, - number=8, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="person_detection_config", - full_name="google.cloud.videointelligence.v1.VideoContext.person_detection_config", - index=7, - number=11, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_tracking_config", - full_name="google.cloud.videointelligence.v1.VideoContext.object_tracking_config", - index=8, - number=13, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=576, - serialized_end=1409, -) - - -_LABELDETECTIONCONFIG = _descriptor.Descriptor( - name="LabelDetectionConfig", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="label_detection_mode", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig.label_detection_mode", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="stationary_camera", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig.stationary_camera", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig.model", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_confidence_threshold", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig.frame_confidence_threshold", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_confidence_threshold", - full_name="google.cloud.videointelligence.v1.LabelDetectionConfig.video_confidence_threshold", - index=4, - number=5, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1412, - serialized_end=1633, -) - - -_SHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="ShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1.ShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( 
- name="model", - full_name="google.cloud.videointelligence.v1.ShotChangeDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1635, - serialized_end=1677, -) - - -_OBJECTTRACKINGCONFIG = _descriptor.Descriptor( - name="ObjectTrackingConfig", - full_name="google.cloud.videointelligence.v1.ObjectTrackingConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1.ObjectTrackingConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1679, - serialized_end=1716, -) - - -_FACEDETECTIONCONFIG = _descriptor.Descriptor( - name="FaceDetectionConfig", - full_name="google.cloud.videointelligence.v1.FaceDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1.FaceDetectionConfig.model", - index=0, - number=1, - 
type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_bounding_boxes", - full_name="google.cloud.videointelligence.v1.FaceDetectionConfig.include_bounding_boxes", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_attributes", - full_name="google.cloud.videointelligence.v1.FaceDetectionConfig.include_attributes", - index=2, - number=5, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1718, - serialized_end=1814, -) - - -_PERSONDETECTIONCONFIG = _descriptor.Descriptor( - name="PersonDetectionConfig", - full_name="google.cloud.videointelligence.v1.PersonDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="include_bounding_boxes", - full_name="google.cloud.videointelligence.v1.PersonDetectionConfig.include_bounding_boxes", - index=0, - number=1, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - 
message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_pose_landmarks", - full_name="google.cloud.videointelligence.v1.PersonDetectionConfig.include_pose_landmarks", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_attributes", - full_name="google.cloud.videointelligence.v1.PersonDetectionConfig.include_attributes", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1816, - serialized_end=1931, -) - - -_EXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="ExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1.ExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1.ExplicitContentDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, 
- extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1933, - serialized_end=1980, -) - - -_TEXTDETECTIONCONFIG = _descriptor.Descriptor( - name="TextDetectionConfig", - full_name="google.cloud.videointelligence.v1.TextDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_hints", - full_name="google.cloud.videointelligence.v1.TextDetectionConfig.language_hints", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1.TextDetectionConfig.model", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1982, - serialized_end=2042, -) - - -_VIDEOSEGMENT = _descriptor.Descriptor( - name="VideoSegment", - full_name="google.cloud.videointelligence.v1.VideoSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - 
_descriptor.FieldDescriptor( - name="start_time_offset", - full_name="google.cloud.videointelligence.v1.VideoSegment.start_time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time_offset", - full_name="google.cloud.videointelligence.v1.VideoSegment.end_time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2044, - serialized_end=2164, -) - - -_LABELSEGMENT = _descriptor.Descriptor( - name="LabelSegment", - full_name="google.cloud.videointelligence.v1.LabelSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.LabelSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.LabelSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - 
has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2166, - serialized_end=2266, -) - - -_LABELFRAME = _descriptor.Descriptor( - name="LabelFrame", - full_name="google.cloud.videointelligence.v1.LabelFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.LabelFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.LabelFrame.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2268, - serialized_end=2348, -) - - -_ENTITY = _descriptor.Descriptor( - name="Entity", - full_name="google.cloud.videointelligence.v1.Entity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity_id", - full_name="google.cloud.videointelligence.v1.Entity.entity_id", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - full_name="google.cloud.videointelligence.v1.Entity.description", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1.Entity.language_code", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2350, - serialized_end=2421, -) - - -_LABELANNOTATION = _descriptor.Descriptor( - name="LabelAnnotation", - full_name="google.cloud.videointelligence.v1.LabelAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1.LabelAnnotation.entity", - 
index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="category_entities", - full_name="google.cloud.videointelligence.v1.LabelAnnotation.category_entities", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1.LabelAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1.LabelAnnotation.frames", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.LabelAnnotation.version", - index=4, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2424, - serialized_end=2717, -) - - -_EXPLICITCONTENTFRAME = _descriptor.Descriptor( - name="ExplicitContentFrame", - full_name="google.cloud.videointelligence.v1.ExplicitContentFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.ExplicitContentFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="pornography_likelihood", - full_name="google.cloud.videointelligence.v1.ExplicitContentFrame.pornography_likelihood", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2720, - serialized_end=2869, -) - - -_EXPLICITCONTENTANNOTATION = _descriptor.Descriptor( - name="ExplicitContentAnnotation", - full_name="google.cloud.videointelligence.v1.ExplicitContentAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - 
fields=[ - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1.ExplicitContentAnnotation.frames", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.ExplicitContentAnnotation.version", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2871, - serialized_end=2988, -) - - -_NORMALIZEDBOUNDINGBOX = _descriptor.Descriptor( - name="NormalizedBoundingBox", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingBox", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="left", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingBox.left", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="top", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingBox.top", - index=1, - number=2, - type=2, - 
cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="right", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingBox.right", - index=2, - number=3, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="bottom", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingBox.bottom", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2990, - serialized_end=3071, -) - - -_FACEDETECTIONANNOTATION = _descriptor.Descriptor( - name="FaceDetectionAnnotation", - full_name="google.cloud.videointelligence.v1.FaceDetectionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.FaceDetectionAnnotation.version", - index=0, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3073, - serialized_end=3115, -) - - -_PERSONDETECTIONANNOTATION = _descriptor.Descriptor( - name="PersonDetectionAnnotation", - full_name="google.cloud.videointelligence.v1.PersonDetectionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="tracks", - full_name="google.cloud.videointelligence.v1.PersonDetectionAnnotation.tracks", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.PersonDetectionAnnotation.version", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3117, - serialized_end=3219, -) - - -_FACESEGMENT = _descriptor.Descriptor( - name="FaceSegment", - full_name="google.cloud.videointelligence.v1.FaceSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - 
fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.FaceSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3221, - serialized_end=3300, -) - - -_FACEFRAME = _descriptor.Descriptor( - name="FaceFrame", - full_name="google.cloud.videointelligence.v1.FaceFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_boxes", - full_name="google.cloud.videointelligence.v1.FaceFrame.normalized_bounding_boxes", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.FaceFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=b"\030\001", - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3303, - 
serialized_end=3459, -) - - -_FACEANNOTATION = _descriptor.Descriptor( - name="FaceAnnotation", - full_name="google.cloud.videointelligence.v1.FaceAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="thumbnail", - full_name="google.cloud.videointelligence.v1.FaceAnnotation.thumbnail", - index=0, - number=1, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1.FaceAnnotation.segments", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1.FaceAnnotation.frames", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=b"\030\001", - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3462, - serialized_end=3629, -) - - -_TIMESTAMPEDOBJECT = _descriptor.Descriptor( - name="TimestampedObject", - full_name="google.cloud.videointelligence.v1.TimestampedObject", - filename=None, - file=DESCRIPTOR, - 
containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_box", - full_name="google.cloud.videointelligence.v1.TimestampedObject.normalized_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.TimestampedObject.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="attributes", - full_name="google.cloud.videointelligence.v1.TimestampedObject.attributes", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="landmarks", - full_name="google.cloud.videointelligence.v1.TimestampedObject.landmarks", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - 
is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3632, - serialized_end=3946, -) - - -_TRACK = _descriptor.Descriptor( - name="Track", - full_name="google.cloud.videointelligence.v1.Track", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.Track.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="timestamped_objects", - full_name="google.cloud.videointelligence.v1.Track.timestamped_objects", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="attributes", - full_name="google.cloud.videointelligence.v1.Track.attributes", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.Track.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - 
extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3949, - serialized_end=4209, -) - - -_DETECTEDATTRIBUTE = _descriptor.Descriptor( - name="DetectedAttribute", - full_name="google.cloud.videointelligence.v1.DetectedAttribute", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.cloud.videointelligence.v1.DetectedAttribute.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.DetectedAttribute.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="value", - full_name="google.cloud.videointelligence.v1.DetectedAttribute.value", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - 
enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4211, - serialized_end=4279, -) - - -_DETECTEDLANDMARK = _descriptor.Descriptor( - name="DetectedLandmark", - full_name="google.cloud.videointelligence.v1.DetectedLandmark", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.cloud.videointelligence.v1.DetectedLandmark.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="point", - full_name="google.cloud.videointelligence.v1.DetectedLandmark.point", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.DetectedLandmark.confidence", - index=2, - number=3, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4281, - serialized_end=4401, -) - - 
-_VIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="VideoAnnotationResults", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.segment", - index=1, - number=10, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_label_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.segment_label_annotations", - index=2, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_presence_label_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.segment_presence_label_annotations", - index=3, - number=23, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - 
containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_label_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.shot_label_annotations", - index=4, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_presence_label_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.shot_presence_label_annotations", - index=5, - number=24, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_label_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.frame_label_annotations", - index=6, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.face_annotations", - index=7, - number=5, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=b"\030\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_detection_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.face_detection_annotations", - index=8, - number=13, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.shot_annotations", - index=9, - number=6, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_annotation", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.explicit_annotation", - index=10, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcriptions", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.speech_transcriptions", - index=11, - number=11, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.FieldDescriptor( - name="text_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.text_annotations", - index=12, - number=12, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.object_annotations", - index=13, - number=14, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="logo_recognition_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.logo_recognition_annotations", - index=14, - number=19, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="person_detection_annotations", - full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.person_detection_annotations", - index=15, - number=20, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="error", - 
full_name="google.cloud.videointelligence.v1.VideoAnnotationResults.error", - index=16, - number=9, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4404, - serialized_end=5789, -) - - -_ANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="AnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1.AnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_results", - full_name="google.cloud.videointelligence.v1.AnnotateVideoResponse.annotation_results", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=5791, - serialized_end=5901, -) - - -_VIDEOANNOTATIONPROGRESS = _descriptor.Descriptor( - name="VideoAnnotationProgress", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.input_uri", - index=0, - number=1, - 
type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.progress_percent", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.start_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="update_time", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.update_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="feature", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.feature", - index=4, - number=5, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - 
extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.VideoAnnotationProgress.segment", - index=5, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=5904, - serialized_end=6198, -) - - -_ANNOTATEVIDEOPROGRESS = _descriptor.Descriptor( - name="AnnotateVideoProgress", - full_name="google.cloud.videointelligence.v1.AnnotateVideoProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_progress", - full_name="google.cloud.videointelligence.v1.AnnotateVideoProgress.annotation_progress", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6200, - serialized_end=6312, -) - - -_SPEECHTRANSCRIPTIONCONFIG = _descriptor.Descriptor( - name="SpeechTranscriptionConfig", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.language_code", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="max_alternatives", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.max_alternatives", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="filter_profanity", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.filter_profanity", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_contexts", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.speech_contexts", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - 
name="enable_automatic_punctuation", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.enable_automatic_punctuation", - index=4, - number=5, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="audio_tracks", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.audio_tracks", - index=5, - number=6, - type=5, - cpp_type=1, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_speaker_diarization", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.enable_speaker_diarization", - index=6, - number=7, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="diarization_speaker_count", - full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.diarization_speaker_count", - index=7, - number=8, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_word_confidence", - 
full_name="google.cloud.videointelligence.v1.SpeechTranscriptionConfig.enable_word_confidence", - index=8, - number=9, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6315, - serialized_end=6700, -) - - -_SPEECHCONTEXT = _descriptor.Descriptor( - name="SpeechContext", - full_name="google.cloud.videointelligence.v1.SpeechContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="phrases", - full_name="google.cloud.videointelligence.v1.SpeechContext.phrases", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6702, - serialized_end=6739, -) - - -_SPEECHTRANSCRIPTION = _descriptor.Descriptor( - name="SpeechTranscription", - full_name="google.cloud.videointelligence.v1.SpeechTranscription", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="alternatives", - full_name="google.cloud.videointelligence.v1.SpeechTranscription.alternatives", - index=0, - number=1, - type=11, - cpp_type=10, - 
label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1.SpeechTranscription.language_code", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6742, - serialized_end=6878, -) - - -_SPEECHRECOGNITIONALTERNATIVE = _descriptor.Descriptor( - name="SpeechRecognitionAlternative", - full_name="google.cloud.videointelligence.v1.SpeechRecognitionAlternative", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="transcript", - full_name="google.cloud.videointelligence.v1.SpeechRecognitionAlternative.transcript", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.SpeechRecognitionAlternative.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - 
enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="words", - full_name="google.cloud.videointelligence.v1.SpeechRecognitionAlternative.words", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6881, - serialized_end=7021, -) - - -_WORDINFO = _descriptor.Descriptor( - name="WordInfo", - full_name="google.cloud.videointelligence.v1.WordInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1.WordInfo.start_time", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time", - full_name="google.cloud.videointelligence.v1.WordInfo.end_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - 
name="word", - full_name="google.cloud.videointelligence.v1.WordInfo.word", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.WordInfo.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speaker_tag", - full_name="google.cloud.videointelligence.v1.WordInfo.speaker_tag", - index=4, - number=5, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7024, - serialized_end=7191, -) - - -_NORMALIZEDVERTEX = _descriptor.Descriptor( - name="NormalizedVertex", - full_name="google.cloud.videointelligence.v1.NormalizedVertex", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="x", - full_name="google.cloud.videointelligence.v1.NormalizedVertex.x", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - 
message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="y", - full_name="google.cloud.videointelligence.v1.NormalizedVertex.y", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7193, - serialized_end=7233, -) - - -_NORMALIZEDBOUNDINGPOLY = _descriptor.Descriptor( - name="NormalizedBoundingPoly", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingPoly", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="vertices", - full_name="google.cloud.videointelligence.v1.NormalizedBoundingPoly.vertices", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7235, - serialized_end=7330, -) - - -_TEXTSEGMENT = _descriptor.Descriptor( - name="TextSegment", - full_name="google.cloud.videointelligence.v1.TextSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.TextSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.TextSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1.TextSegment.frames", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7333, - serialized_end=7494, -) - - -_TEXTFRAME = _descriptor.Descriptor( - name="TextFrame", - full_name="google.cloud.videointelligence.v1.TextFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="rotated_bounding_box", - full_name="google.cloud.videointelligence.v1.TextFrame.rotated_bounding_box", - index=0, - number=1, - type=11, - 
cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.TextFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7497, - serialized_end=7645, -) - - -_TEXTANNOTATION = _descriptor.Descriptor( - name="TextAnnotation", - full_name="google.cloud.videointelligence.v1.TextAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="text", - full_name="google.cloud.videointelligence.v1.TextAnnotation.text", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1.TextAnnotation.segments", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - 
file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.TextAnnotation.version", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7647, - serialized_end=7760, -) - - -_OBJECTTRACKINGFRAME = _descriptor.Descriptor( - name="ObjectTrackingFrame", - full_name="google.cloud.videointelligence.v1.ObjectTrackingFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_box", - full_name="google.cloud.videointelligence.v1.ObjectTrackingFrame.normalized_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1.ObjectTrackingFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - 
serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7763, - serialized_end=7923, -) - - -_OBJECTTRACKINGANNOTATION = _descriptor.Descriptor( - name="ObjectTrackingAnnotation", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.segment", - index=0, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="track_id", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.track_id", - index=1, - number=5, - type=3, - cpp_type=2, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.entity", - index=2, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - 
default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.frames", - index=4, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="version", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.version", - index=5, - number=6, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="track_info", - full_name="google.cloud.videointelligence.v1.ObjectTrackingAnnotation.track_info", - index=0, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - ), - ], - serialized_start=7926, - serialized_end=8222, -) - - -_LOGORECOGNITIONANNOTATION = _descriptor.Descriptor( - name="LogoRecognitionAnnotation", - full_name="google.cloud.videointelligence.v1.LogoRecognitionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - 
full_name="google.cloud.videointelligence.v1.LogoRecognitionAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="tracks", - full_name="google.cloud.videointelligence.v1.LogoRecognitionAnnotation.tracks", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1.LogoRecognitionAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=8225, - serialized_end=8436, -) - -_ANNOTATEVIDEOREQUEST.fields_by_name["features"].enum_type = _FEATURE -_ANNOTATEVIDEOREQUEST.fields_by_name["video_context"].message_type = _VIDEOCONTEXT -_VIDEOCONTEXT.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_VIDEOCONTEXT.fields_by_name[ - "label_detection_config" -].message_type = _LABELDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "shot_change_detection_config" -].message_type = _SHOTCHANGEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - 
"explicit_content_detection_config" -].message_type = _EXPLICITCONTENTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "face_detection_config" -].message_type = _FACEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "speech_transcription_config" -].message_type = _SPEECHTRANSCRIPTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "text_detection_config" -].message_type = _TEXTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "person_detection_config" -].message_type = _PERSONDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "object_tracking_config" -].message_type = _OBJECTTRACKINGCONFIG -_LABELDETECTIONCONFIG.fields_by_name[ - "label_detection_mode" -].enum_type = _LABELDETECTIONMODE -_VIDEOSEGMENT.fields_by_name[ - "start_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_VIDEOSEGMENT.fields_by_name[ - "end_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_LABELFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["category_entities"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["segments"].message_type = _LABELSEGMENT -_LABELANNOTATION.fields_by_name["frames"].message_type = _LABELFRAME -_EXPLICITCONTENTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_EXPLICITCONTENTFRAME.fields_by_name["pornography_likelihood"].enum_type = _LIKELIHOOD -_EXPLICITCONTENTANNOTATION.fields_by_name["frames"].message_type = _EXPLICITCONTENTFRAME -_PERSONDETECTIONANNOTATION.fields_by_name["tracks"].message_type = _TRACK -_FACESEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_FACEFRAME.fields_by_name[ - "normalized_bounding_boxes" -].message_type = _NORMALIZEDBOUNDINGBOX -_FACEFRAME.fields_by_name[ - "time_offset" -].message_type = 
google_dot_protobuf_dot_duration__pb2._DURATION -_FACEANNOTATION.fields_by_name["segments"].message_type = _FACESEGMENT -_FACEANNOTATION.fields_by_name["frames"].message_type = _FACEFRAME -_TIMESTAMPEDOBJECT.fields_by_name[ - "normalized_bounding_box" -].message_type = _NORMALIZEDBOUNDINGBOX -_TIMESTAMPEDOBJECT.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TIMESTAMPEDOBJECT.fields_by_name["attributes"].message_type = _DETECTEDATTRIBUTE -_TIMESTAMPEDOBJECT.fields_by_name["landmarks"].message_type = _DETECTEDLANDMARK -_TRACK.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_TRACK.fields_by_name["timestamped_objects"].message_type = _TIMESTAMPEDOBJECT -_TRACK.fields_by_name["attributes"].message_type = _DETECTEDATTRIBUTE -_DETECTEDLANDMARK.fields_by_name["point"].message_type = _NORMALIZEDVERTEX -_VIDEOANNOTATIONRESULTS.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_presence_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_presence_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "frame_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "face_annotations" -].message_type = _FACEANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "face_detection_annotations" -].message_type = _FACEDETECTIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name["shot_annotations"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "speech_transcriptions" -].message_type = 
_SPEECHTRANSCRIPTION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "text_annotations" -].message_type = _TEXTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "object_annotations" -].message_type = _OBJECTTRACKINGANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "logo_recognition_annotations" -].message_type = _LOGORECOGNITIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "person_detection_annotations" -].message_type = _PERSONDETECTIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_ANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _VIDEOANNOTATIONRESULTS -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "update_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name["feature"].enum_type = _FEATURE -_VIDEOANNOTATIONPROGRESS.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_ANNOTATEVIDEOPROGRESS.fields_by_name[ - "annotation_progress" -].message_type = _VIDEOANNOTATIONPROGRESS -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "speech_contexts" -].message_type = _SPEECHCONTEXT -_SPEECHTRANSCRIPTION.fields_by_name[ - "alternatives" -].message_type = _SPEECHRECOGNITIONALTERNATIVE -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["words"].message_type = _WORDINFO -_WORDINFO.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_WORDINFO.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_NORMALIZEDBOUNDINGPOLY.fields_by_name["vertices"].message_type = _NORMALIZEDVERTEX -_TEXTSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_TEXTSEGMENT.fields_by_name["frames"].message_type = _TEXTFRAME -_TEXTFRAME.fields_by_name["rotated_bounding_box"].message_type = _NORMALIZEDBOUNDINGPOLY 
-_TEXTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TEXTANNOTATION.fields_by_name["segments"].message_type = _TEXTSEGMENT -_OBJECTTRACKINGFRAME.fields_by_name[ - "normalized_bounding_box" -].message_type = _NORMALIZEDBOUNDINGBOX -_OBJECTTRACKINGFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_OBJECTTRACKINGANNOTATION.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_OBJECTTRACKINGANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_OBJECTTRACKINGANNOTATION.fields_by_name["frames"].message_type = _OBJECTTRACKINGFRAME -_OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"].fields.append( - _OBJECTTRACKINGANNOTATION.fields_by_name["segment"] -) -_OBJECTTRACKINGANNOTATION.fields_by_name[ - "segment" -].containing_oneof = _OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"] -_OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"].fields.append( - _OBJECTTRACKINGANNOTATION.fields_by_name["track_id"] -) -_OBJECTTRACKINGANNOTATION.fields_by_name[ - "track_id" -].containing_oneof = _OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"] -_LOGORECOGNITIONANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LOGORECOGNITIONANNOTATION.fields_by_name["tracks"].message_type = _TRACK -_LOGORECOGNITIONANNOTATION.fields_by_name["segments"].message_type = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["AnnotateVideoRequest"] = _ANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["VideoContext"] = _VIDEOCONTEXT -DESCRIPTOR.message_types_by_name["LabelDetectionConfig"] = _LABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ShotChangeDetectionConfig" -] = _SHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["ObjectTrackingConfig"] = _OBJECTTRACKINGCONFIG -DESCRIPTOR.message_types_by_name["FaceDetectionConfig"] = _FACEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["PersonDetectionConfig"] = _PERSONDETECTIONCONFIG 
-DESCRIPTOR.message_types_by_name[ - "ExplicitContentDetectionConfig" -] = _EXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["TextDetectionConfig"] = _TEXTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["VideoSegment"] = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["LabelSegment"] = _LABELSEGMENT -DESCRIPTOR.message_types_by_name["LabelFrame"] = _LABELFRAME -DESCRIPTOR.message_types_by_name["Entity"] = _ENTITY -DESCRIPTOR.message_types_by_name["LabelAnnotation"] = _LABELANNOTATION -DESCRIPTOR.message_types_by_name["ExplicitContentFrame"] = _EXPLICITCONTENTFRAME -DESCRIPTOR.message_types_by_name[ - "ExplicitContentAnnotation" -] = _EXPLICITCONTENTANNOTATION -DESCRIPTOR.message_types_by_name["NormalizedBoundingBox"] = _NORMALIZEDBOUNDINGBOX -DESCRIPTOR.message_types_by_name["FaceDetectionAnnotation"] = _FACEDETECTIONANNOTATION -DESCRIPTOR.message_types_by_name[ - "PersonDetectionAnnotation" -] = _PERSONDETECTIONANNOTATION -DESCRIPTOR.message_types_by_name["FaceSegment"] = _FACESEGMENT -DESCRIPTOR.message_types_by_name["FaceFrame"] = _FACEFRAME -DESCRIPTOR.message_types_by_name["FaceAnnotation"] = _FACEANNOTATION -DESCRIPTOR.message_types_by_name["TimestampedObject"] = _TIMESTAMPEDOBJECT -DESCRIPTOR.message_types_by_name["Track"] = _TRACK -DESCRIPTOR.message_types_by_name["DetectedAttribute"] = _DETECTEDATTRIBUTE -DESCRIPTOR.message_types_by_name["DetectedLandmark"] = _DETECTEDLANDMARK -DESCRIPTOR.message_types_by_name["VideoAnnotationResults"] = _VIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name["AnnotateVideoResponse"] = _ANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name["VideoAnnotationProgress"] = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoProgress"] = _ANNOTATEVIDEOPROGRESS -DESCRIPTOR.message_types_by_name[ - "SpeechTranscriptionConfig" -] = _SPEECHTRANSCRIPTIONCONFIG -DESCRIPTOR.message_types_by_name["SpeechContext"] = _SPEECHCONTEXT -DESCRIPTOR.message_types_by_name["SpeechTranscription"] = 
_SPEECHTRANSCRIPTION -DESCRIPTOR.message_types_by_name[ - "SpeechRecognitionAlternative" -] = _SPEECHRECOGNITIONALTERNATIVE -DESCRIPTOR.message_types_by_name["WordInfo"] = _WORDINFO -DESCRIPTOR.message_types_by_name["NormalizedVertex"] = _NORMALIZEDVERTEX -DESCRIPTOR.message_types_by_name["NormalizedBoundingPoly"] = _NORMALIZEDBOUNDINGPOLY -DESCRIPTOR.message_types_by_name["TextSegment"] = _TEXTSEGMENT -DESCRIPTOR.message_types_by_name["TextFrame"] = _TEXTFRAME -DESCRIPTOR.message_types_by_name["TextAnnotation"] = _TEXTANNOTATION -DESCRIPTOR.message_types_by_name["ObjectTrackingFrame"] = _OBJECTTRACKINGFRAME -DESCRIPTOR.message_types_by_name["ObjectTrackingAnnotation"] = _OBJECTTRACKINGANNOTATION -DESCRIPTOR.message_types_by_name[ - "LogoRecognitionAnnotation" -] = _LOGORECOGNITIONANNOTATION -DESCRIPTOR.enum_types_by_name["Feature"] = _FEATURE -DESCRIPTOR.enum_types_by_name["LabelDetectionMode"] = _LABELDETECTIONMODE -DESCRIPTOR.enum_types_by_name["Likelihood"] = _LIKELIHOOD -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -AnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video annotation request. - - - Attributes: - input_uri: - Input video location. Currently, only `Cloud Storage - `__ URIs are supported. - URIs must be specified in the following format: ``gs://bucket- - id/object-id`` (other URI formats return [google.rpc.Code.INVA - LID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more - information, see `Request URIs - `__. - To identify multiple videos, a video URI may include wildcards - in the ``object-id``. Supported wildcards: ’*’ to match 0 or - more characters; ‘?’ to match 1 character. If unset, the input - video should be embedded in the request as ``input_content``. - If set, ``input_content`` must be unset. 
- input_content: - The video data bytes. If unset, the input video(s) should be - specified via the ``input_uri``. If set, ``input_uri`` must be - unset. - features: - Required. Requested video annotation features. - video_context: - Additional video context and/or feature-specific parameters. - output_uri: - Optional. Location where the output (in JSON format) should be - stored. Currently, only `Cloud Storage - `__ URIs are supported. - These must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - location_id: - Optional. Cloud region where annotation should take place. - Supported cloud regions are: ``us-east1``, ``us-west1``, - ``europe-west1``, ``asia-east1``. If no region is specified, - the region will be determined based on video file location. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.AnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(AnnotateVideoRequest) - -VideoContext = _reflection.GeneratedProtocolMessageType( - "VideoContext", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOCONTEXT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video context and/or feature-specific parameters. - - - Attributes: - segments: - Video segments to annotate. The segments may overlap and are - not required to be contiguous or span the whole video. If - unspecified, each video is treated as a single segment. - label_detection_config: - Config for LABEL_DETECTION. - shot_change_detection_config: - Config for SHOT_CHANGE_DETECTION. - explicit_content_detection_config: - Config for EXPLICIT_CONTENT_DETECTION. - face_detection_config: - Config for FACE_DETECTION. - speech_transcription_config: - Config for SPEECH_TRANSCRIPTION. - text_detection_config: - Config for TEXT_DETECTION. 
- person_detection_config: - Config for PERSON_DETECTION. - object_tracking_config: - Config for OBJECT_TRACKING. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.VideoContext) - }, -) -_sym_db.RegisterMessage(VideoContext) - -LabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "LabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _LABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for LABEL_DETECTION. - - - Attributes: - label_detection_mode: - What labels should be detected with LABEL_DETECTION, in - addition to video-level labels or segment-level labels. If - unspecified, defaults to ``SHOT_MODE``. - stationary_camera: - Whether the video has been shot from a stationary (i.e., non- - moving) camera. When set to true, might improve detection - accuracy for moving objects. Should be used with - ``SHOT_AND_FRAME_MODE`` enabled. - model: - Model to use for label detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - frame_confidence_threshold: - The confidence threshold used to filter labels from - frame-level detection. If not set, it is set to 0.4 by - default. The valid range for this threshold is [0.1, 0.9]. Any - value set outside of this range will be clipped. Note: For - best results, follow the default threshold. We will update the - default threshold every time we release a new model. - video_confidence_threshold: - The confidence threshold used to filter labels from - video-level and shot-level detections. If not set, it’s - set to 0.3 by default. The valid range for this threshold is - [0.1, 0.9]. Any value set outside of this range will be - clipped. Note: For best results, follow the default threshold. - We will update the default threshold every time we release - a new model. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.LabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(LabelDetectionConfig) - -ShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for SHOT_CHANGE_DETECTION. - - - Attributes: - model: - Model to use for shot change detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(ShotChangeDetectionConfig) - -ObjectTrackingConfig = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingConfig", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for OBJECT_TRACKING. - - - Attributes: - model: - Model to use for object tracking. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ObjectTrackingConfig) - }, -) -_sym_db.RegisterMessage(ObjectTrackingConfig) - -FaceDetectionConfig = _reflection.GeneratedProtocolMessageType( - "FaceDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _FACEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for FACE_DETECTION. - - - Attributes: - model: - Model to use for face detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - include_bounding_boxes: - Whether bounding boxes are included in the face annotation - output. 
- include_attributes: - Whether to enable face attributes detection, such as glasses, - dark_glasses, mouth_open etc. Ignored if - ‘include_bounding_boxes’ is set to false. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.FaceDetectionConfig) - }, -) -_sym_db.RegisterMessage(FaceDetectionConfig) - -PersonDetectionConfig = _reflection.GeneratedProtocolMessageType( - "PersonDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _PERSONDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for PERSON_DETECTION. - - - Attributes: - include_bounding_boxes: - Whether bounding boxes are included in the person detection - annotation output. - include_pose_landmarks: - Whether to enable pose landmarks detection. Ignored if - ‘include_bounding_boxes’ is set to false. - include_attributes: - Whether to enable person attributes detection, such as cloth - color (black, blue, etc), type (coat, dress, etc), pattern - (plain, floral, etc), hair, etc. Ignored if - ‘include_bounding_boxes’ is set to false. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.PersonDetectionConfig) - }, -) -_sym_db.RegisterMessage(PersonDetectionConfig) - -ExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for EXPLICIT_CONTENT_DETECTION. - - - Attributes: - model: - Model to use for explicit content detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(ExplicitContentDetectionConfig) - -TextDetectionConfig = _reflection.GeneratedProtocolMessageType( - "TextDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _TEXTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for TEXT_DETECTION. - - - Attributes: - language_hints: - Language hint can be specified if the language to be detected - is known a priori. It can increase the accuracy of the - detection. Language hint must be language code in BCP-47 - format. Automatic language detection is performed if no hint - is provided. - model: - Model to use for text detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.TextDetectionConfig) - }, -) -_sym_db.RegisterMessage(TextDetectionConfig) - -VideoSegment = _reflection.GeneratedProtocolMessageType( - "VideoSegment", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOSEGMENT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video segment. - - - Attributes: - start_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the start of the segment (inclusive). - end_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the end of the segment (inclusive). 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.VideoSegment) - }, -) -_sym_db.RegisterMessage(VideoSegment) - -LabelSegment = _reflection.GeneratedProtocolMessageType( - "LabelSegment", - (_message.Message,), - { - "DESCRIPTOR": _LABELSEGMENT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for label detection. - - - Attributes: - segment: - Video segment where a label was detected. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.LabelSegment) - }, -) -_sym_db.RegisterMessage(LabelSegment) - -LabelFrame = _reflection.GeneratedProtocolMessageType( - "LabelFrame", - (_message.Message,), - { - "DESCRIPTOR": _LABELFRAME, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for label detection. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.LabelFrame) - }, -) -_sym_db.RegisterMessage(LabelFrame) - -Entity = _reflection.GeneratedProtocolMessageType( - "Entity", - (_message.Message,), - { - "DESCRIPTOR": _ENTITY, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Detected entity from video analysis. - - - Attributes: - entity_id: - Opaque entity ID. Some IDs may be available in `Google - Knowledge Graph Search API - `__. - description: - Textual description, e.g., ``Fixed-gear bicycle``. - language_code: - Language code for ``description`` in BCP-47 format. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.Entity) - }, -) -_sym_db.RegisterMessage(Entity) - -LabelAnnotation = _reflection.GeneratedProtocolMessageType( - "LabelAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LABELANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Label annotation. - - - Attributes: - entity: - Detected entity. - category_entities: - Common categories for the detected entity. For example, when - the label is ``Terrier``, the category is likely ``dog``. And - in some cases there might be more than one categories e.g., - ``Terrier`` could also be a ``pet``. - segments: - All video segments where a label was detected. - frames: - All video frames where a label was detected. - version: - Feature version. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.LabelAnnotation) - }, -) -_sym_db.RegisterMessage(LabelAnnotation) - -ExplicitContentFrame = _reflection.GeneratedProtocolMessageType( - "ExplicitContentFrame", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTFRAME, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for explicit content. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - pornography_likelihood: - Likelihood of the pornography content.. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ExplicitContentFrame) - }, -) -_sym_db.RegisterMessage(ExplicitContentFrame) - -ExplicitContentAnnotation = _reflection.GeneratedProtocolMessageType( - "ExplicitContentAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Explicit content annotation (based on per-frame visual signals only). - If no explicit content has been detected in a frame, no annotations - are present for that frame. - - - Attributes: - frames: - All video frames where explicit content was detected. - version: - Feature version. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ExplicitContentAnnotation) - }, -) -_sym_db.RegisterMessage(ExplicitContentAnnotation) - -NormalizedBoundingBox = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingBox", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGBOX, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding box. The normalized vertex coordinates are - relative to the original image. Range: [0, 1]. - - - Attributes: - left: - Left X coordinate. - top: - Top Y coordinate. - right: - Right X coordinate. - bottom: - Bottom Y coordinate. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.NormalizedBoundingBox) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingBox) - -FaceDetectionAnnotation = _reflection.GeneratedProtocolMessageType( - "FaceDetectionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _FACEDETECTIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Face detection annotation. - - - Attributes: - version: - Feature version. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.FaceDetectionAnnotation) - }, -) -_sym_db.RegisterMessage(FaceDetectionAnnotation) - -PersonDetectionAnnotation = _reflection.GeneratedProtocolMessageType( - "PersonDetectionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _PERSONDETECTIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Person detection annotation per video. - - - Attributes: - tracks: - The detected tracks of a person. - version: - Feature version. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.PersonDetectionAnnotation) - }, -) -_sym_db.RegisterMessage(PersonDetectionAnnotation) - -FaceSegment = _reflection.GeneratedProtocolMessageType( - "FaceSegment", - (_message.Message,), - { - "DESCRIPTOR": _FACESEGMENT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for face detection. - - - Attributes: - segment: - Video segment where a face was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.FaceSegment) - }, -) -_sym_db.RegisterMessage(FaceSegment) - -FaceFrame = _reflection.GeneratedProtocolMessageType( - "FaceFrame", - (_message.Message,), - { - "DESCRIPTOR": _FACEFRAME, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Deprecated. No effect. - - - Attributes: - normalized_bounding_boxes: - Normalized Bounding boxes in a frame. There can be more than - one boxes if the same face is detected in multiple locations - within the current frame. - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.FaceFrame) - }, -) -_sym_db.RegisterMessage(FaceFrame) - -FaceAnnotation = _reflection.GeneratedProtocolMessageType( - "FaceAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _FACEANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Deprecated. No effect. - - - Attributes: - thumbnail: - Thumbnail of a representative face view (in JPEG format). - segments: - All video segments where a face was detected. - frames: - All video frames where a face was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.FaceAnnotation) - }, -) -_sym_db.RegisterMessage(FaceAnnotation) - -TimestampedObject = _reflection.GeneratedProtocolMessageType( - "TimestampedObject", - (_message.Message,), - { - "DESCRIPTOR": _TIMESTAMPEDOBJECT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """For tracking related features. An object at time_offset with - attributes, and located with normalized_bounding_box. - - - Attributes: - normalized_bounding_box: - Normalized Bounding box in a frame, where the object is - located. - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this object. - attributes: - Optional. The attributes of the object in the bounding box. - landmarks: - Optional. The detected landmarks. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.TimestampedObject) - }, -) -_sym_db.RegisterMessage(TimestampedObject) - -Track = _reflection.GeneratedProtocolMessageType( - "Track", - (_message.Message,), - { - "DESCRIPTOR": _TRACK, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """A track of an object instance. - - - Attributes: - segment: - Video segment of a track. 
- timestamped_objects: - The object with timestamp and attributes per frame in the - track. - attributes: - Optional. Attributes in the track level. - confidence: - Optional. The confidence score of the tracked object. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.Track) - }, -) -_sym_db.RegisterMessage(Track) - -DetectedAttribute = _reflection.GeneratedProtocolMessageType( - "DetectedAttribute", - (_message.Message,), - { - "DESCRIPTOR": _DETECTEDATTRIBUTE, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """A generic detected attribute represented by name in string format. - - - Attributes: - name: - The name of the attribute, for example, glasses, dark_glasses, - mouth_open. A full list of supported type names will be - provided in the document. - confidence: - Detected attribute confidence. Range [0, 1]. - value: - Text value of the detection result. For example, the value for - “HairColor” can be “black”, “blonde”, etc. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.DetectedAttribute) - }, -) -_sym_db.RegisterMessage(DetectedAttribute) - -DetectedLandmark = _reflection.GeneratedProtocolMessageType( - "DetectedLandmark", - (_message.Message,), - { - "DESCRIPTOR": _DETECTEDLANDMARK, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """A generic detected landmark represented by name in string format and a - 2D location. - - - Attributes: - name: - The name of this landmark, for example, left_hand, - right_shoulder. - point: - The 2D point of the detected landmark using the normalized - image coordinate system. The normalized coordinates have the - range from 0 to 1. - confidence: - The confidence score of the detected landmark. Range [0, 1].
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.DetectedLandmark) - }, -) -_sym_db.RegisterMessage(DetectedLandmark) - -VideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Annotation results for a single video. - - - Attributes: - input_uri: - Video file location in `Cloud Storage - `__. - segment: - Video segment on which the annotation is run. - segment_label_annotations: - Topical label annotations on video level or user-specified - segment level. There is exactly one element for each unique - label. - segment_presence_label_annotations: - Presence label annotations on video level or user-specified - segment level. There is exactly one element for each unique - label. Compared to the existing topical - ``segment_label_annotations``, this field presents more fine- - grained, segment-level labels detected in video content and is - made available only when the client sets - ``LabelDetectionConfig.model`` to “builtin/latest” in the - request. - shot_label_annotations: - Topical label annotations on shot level. There is exactly one - element for each unique label. - shot_presence_label_annotations: - Presence label annotations on shot level. There is exactly one - element for each unique label. Compared to the existing - topical ``shot_label_annotations``, this field presents more - fine-grained, shot-level labels detected in video content and - is made available only when the client sets - ``LabelDetectionConfig.model`` to “builtin/latest” in the - request. - frame_label_annotations: - Label annotations on frame level. There is exactly one element - for each unique label. - face_annotations: - Deprecated. Please use ``face_detection_annotations`` instead. - face_detection_annotations: - Face detection annotations. 
- shot_annotations: - Shot annotations. Each shot is represented as a video segment. - explicit_annotation: - Explicit content annotation. - speech_transcriptions: - Speech transcription. - text_annotations: - OCR text detection and tracking. Annotations for list of - detected text snippets. Each will have list of frame - information associated with it. - object_annotations: - Annotations for list of objects detected and tracked in video. - logo_recognition_annotations: - Annotations for list of logos detected, tracked and recognized - in video. - person_detection_annotations: - Person detection annotations. - error: - If set, indicates an error. Note that for a single - ``AnnotateVideoRequest`` some videos may succeed and some may - fail. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.VideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(VideoAnnotationResults) - -AnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video annotation response. Included in the ``response`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_results: - Annotation results for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.AnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(AnnotateVideoResponse) - -VideoAnnotationProgress = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationProgress", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONPROGRESS, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Annotation progress for a single video. 
- - - Attributes: - input_uri: - Video file location in `Cloud Storage - `__. - progress_percent: - Approximate percentage processed thus far. Guaranteed to be - 100 when fully processed. - start_time: - Time when the request was received. - update_time: - Time of the most recent update. - feature: - Specifies which feature is being tracked if the request - contains more than one feature. - segment: - Specifies which segment is being tracked if the request - contains more than one segment. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.VideoAnnotationProgress) - }, -) -_sym_db.RegisterMessage(VideoAnnotationProgress) - -AnnotateVideoProgress = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoProgress", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOPROGRESS, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video annotation progress. Included in the ``metadata`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_progress: - Progress metadata for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.AnnotateVideoProgress) - }, -) -_sym_db.RegisterMessage(AnnotateVideoProgress) - -SpeechTranscriptionConfig = _reflection.GeneratedProtocolMessageType( - "SpeechTranscriptionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Config for SPEECH_TRANSCRIPTION. - - - Attributes: - language_code: - Required. *Required* The language of the supplied audio as a - `BCP-47 `__ - language tag. Example: “en-US”. See `Language Support - `__ for a list - of the currently supported language codes. - max_alternatives: - Optional. 
Maximum number of recognition hypotheses to be - returned. Specifically, the maximum number of - ``SpeechRecognitionAlternative`` messages within each - ``SpeechTranscription``. The server may return fewer than - ``max_alternatives``. Valid values are ``0``-``30``. A value - of ``0`` or ``1`` will return a maximum of one. If omitted, - will return a maximum of one. - filter_profanity: - Optional. If set to ``true``, the server will attempt to - filter out profanities, replacing all but the initial - character in each filtered word with asterisks, e.g. "f***". - If set to ``false`` or omitted, profanities won’t be filtered - out. - speech_contexts: - Optional. A means to provide context to assist the speech - recognition. - enable_automatic_punctuation: - Optional. If ‘true’, adds punctuation to recognition result - hypotheses. This feature is only available in select - languages. Setting this for requests in other languages has no - effect at all. The default ‘false’ value does not add - punctuation to result hypotheses. NOTE: “This is currently - offered as an experimental service, complimentary to all - users. In the future this may be exclusively available as a - premium feature.” - audio_tracks: - Optional. For file formats, such as MXF or MKV, supporting - multiple audio tracks, specify up to two tracks. Default: - track 0. - enable_speaker_diarization: - Optional. If ‘true’, enables speaker detection for each - recognized word in the top alternative of the recognition - result using a speaker_tag provided in the WordInfo. Note: - When this is true, we send all the words from the beginning of - the audio for the top alternative in every consecutive - response. This is done in order to improve our speaker tags as - our models learn to identify the speakers in the conversation - over time. - diarization_speaker_count: - Optional. If set, specifies the estimated number of speakers - in the conversation. If not set, defaults to ‘2’. 
Ignored - unless enable_speaker_diarization is set to true. - enable_word_confidence: - Optional. If ``true``, the top result includes a list of words - and the confidence for those words. If ``false``, no word- - level confidence information is returned. The default is - ``false``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.SpeechTranscriptionConfig) - }, -) -_sym_db.RegisterMessage(SpeechTranscriptionConfig) - -SpeechContext = _reflection.GeneratedProtocolMessageType( - "SpeechContext", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHCONTEXT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Provides “hints” to the speech recognizer to favor specific words and - phrases in the results. - - - Attributes: - phrases: - Optional. A list of strings containing words and phrases - “hints” so that the speech recognition is more likely to - recognize them. This can be used to improve the accuracy for - specific words and phrases, for example, if specific commands - are typically spoken by the user. This can also be used to add - additional words to the vocabulary of the recognizer. See - `usage limits - `__. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.SpeechContext) - }, -) -_sym_db.RegisterMessage(SpeechContext) - -SpeechTranscription = _reflection.GeneratedProtocolMessageType( - "SpeechTranscription", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """A speech recognition result corresponding to a portion of the audio. - - - Attributes: - alternatives: - May contain one or more recognition hypotheses (up to the - maximum specified in ``max_alternatives``). These alternatives - are ordered in terms of accuracy, with the top (first) - alternative being the most probable, as ranked by the - recognizer. - language_code: - Output only. 
The `BCP-47 `__ language tag of the language - in this result. This language code was detected to have the - most likelihood of being spoken in the audio. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.SpeechTranscription) - }, -) -_sym_db.RegisterMessage(SpeechTranscription) - -SpeechRecognitionAlternative = _reflection.GeneratedProtocolMessageType( - "SpeechRecognitionAlternative", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHRECOGNITIONALTERNATIVE, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Alternative hypotheses (a.k.a. n-best list). - - - Attributes: - transcript: - Transcript text representing the words that the user spoke. - confidence: - Output only. The confidence estimate between 0.0 and 1.0. A - higher number indicates an estimated greater likelihood that - the recognized words are correct. This field is set only for - the top alternative. This field is not guaranteed to be - accurate and users should not rely on it to be always - provided. The default of 0.0 is a sentinel value indicating - ``confidence`` was not set. - words: - Output only. A list of word-specific information for each - recognized word. Note: When ``enable_speaker_diarization`` is - set to true, you will see all the words from the beginning of - the audio. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.SpeechRecognitionAlternative) - }, -) -_sym_db.RegisterMessage(SpeechRecognitionAlternative) - -WordInfo = _reflection.GeneratedProtocolMessageType( - "WordInfo", - (_message.Message,), - { - "DESCRIPTOR": _WORDINFO, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Word-specific information for recognized words. Word information is - only included in the response when certain request parameters are set, - such as ``enable_word_time_offsets``. 
- - - Attributes: - start_time: - Time offset relative to the beginning of the audio, and - corresponding to the start of the spoken word. This field is - only set if ``enable_word_time_offsets=true`` and only in the - top hypothesis. This is an experimental feature and the - accuracy of the time offset can vary. - end_time: - Time offset relative to the beginning of the audio, and - corresponding to the end of the spoken word. This field is - only set if ``enable_word_time_offsets=true`` and only in the - top hypothesis. This is an experimental feature and the - accuracy of the time offset can vary. - word: - The word corresponding to this set of information. - confidence: - Output only. The confidence estimate between 0.0 and 1.0. A - higher number indicates an estimated greater likelihood that - the recognized words are correct. This field is set only for - the top alternative. This field is not guaranteed to be - accurate and users should not rely on it to be always - provided. The default of 0.0 is a sentinel value indicating - ``confidence`` was not set. - speaker_tag: - Output only. A distinct integer value is assigned for every - speaker within the audio. This field specifies which one of - those speakers was detected to have spoken this word. Value - ranges from 1 up to diarization_speaker_count, and is only set - if speaker diarization is enabled. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.WordInfo) - }, -) -_sym_db.RegisterMessage(WordInfo) - -NormalizedVertex = _reflection.GeneratedProtocolMessageType( - "NormalizedVertex", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDVERTEX, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """X coordinate. - - - Attributes: - y: - Y coordinate. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.NormalizedVertex) - }, -) -_sym_db.RegisterMessage(NormalizedVertex) - -NormalizedBoundingPoly = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingPoly", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGPOLY, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding polygon for text (that might not be aligned with - axis). Contains list of the corner points in clockwise order starting - from top-left corner. For example, for a rectangular bounding box: - When the text is horizontal it might look like: 0—-1 \| \| 3—-2 When - it’s clockwise rotated 180 degrees around the top-left corner it - becomes: 2—-3 \| \| 1—-0 and the vertex order will still be (0, 1, 2, - 3). Note that values can be less than 0, or greater than 1 due to - trignometric calculations for location of the box. - - - Attributes: - vertices: - Normalized vertices of the bounding polygon. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.NormalizedBoundingPoly) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingPoly) - -TextSegment = _reflection.GeneratedProtocolMessageType( - "TextSegment", - (_message.Message,), - { - "DESCRIPTOR": _TEXTSEGMENT, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for text detection. - - - Attributes: - segment: - Video segment where a text snippet was detected. - confidence: - Confidence for the track of detected text. It is calculated as - the highest over all frames where OCR detected text appears. - frames: - Information related to the frames where OCR detected text - appears. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.TextSegment) - }, -) -_sym_db.RegisterMessage(TextSegment) - -TextFrame = _reflection.GeneratedProtocolMessageType( - "TextFrame", - (_message.Message,), - { - "DESCRIPTOR": _TEXTFRAME, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for text annotation (OCR). - Contains information regarding timestamp and bounding box locations - for the frames containing detected OCR text snippets. - - - Attributes: - rotated_bounding_box: - Bounding polygon of the detected text for this frame. - time_offset: - Timestamp of this frame. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.TextFrame) - }, -) -_sym_db.RegisterMessage(TextFrame) - -TextAnnotation = _reflection.GeneratedProtocolMessageType( - "TextAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _TEXTANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Annotations related to one detected OCR text snippet. This will - contain the corresponding text, confidence value, and frame level - information for each detection. - - - Attributes: - text: - The detected text. - segments: - All video segments where OCR detected text appears. - version: - Feature version. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.TextAnnotation) - }, -) -_sym_db.RegisterMessage(TextAnnotation) - -ObjectTrackingFrame = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingFrame", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGFRAME, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotations for object detection and tracking. This - field stores per frame location, time offset, and confidence. 
- - - Attributes: - normalized_bounding_box: - The normalized bounding box location of this object track for - the frame. - time_offset: - The timestamp of the frame in microseconds. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ObjectTrackingFrame) - }, -) -_sym_db.RegisterMessage(ObjectTrackingFrame) - -ObjectTrackingAnnotation = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Annotations corresponding to one tracked object. - - - Attributes: - track_info: - Different representation of tracking info in non-streaming - batch and streaming modes. - segment: - Non-streaming batch mode ONLY. Each object track corresponds - to one video segment where it appears. - track_id: - Streaming mode ONLY. In streaming mode, we do not know the end - time of a tracked object before it is completed. Hence, there - is no VideoSegment info returned. Instead, we provide a unique - identifiable integer track_id so that the customers can - correlate the results of the ongoing ObjectTrackAnnotation of - the same track_id over time. - entity: - Entity to specify the object category that this track is - labeled as. - confidence: - Object category’s labeling confidence of this track. - frames: - Information corresponding to all frames where this object - track appears. Non-streaming batch mode: it may be one or - multiple ObjectTrackingFrame messages in frames. Streaming - mode: it can only be one ObjectTrackingFrame message in - frames. - version: - Feature version. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.ObjectTrackingAnnotation) - }, -) -_sym_db.RegisterMessage(ObjectTrackingAnnotation) - -LogoRecognitionAnnotation = _reflection.GeneratedProtocolMessageType( - "LogoRecognitionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LOGORECOGNITIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1.proto.video_intelligence_pb2", - "__doc__": """Annotation corresponding to one detected, tracked and recognized logo - class. - - - Attributes: - entity: - Entity category information to specify the logo class that all - the logo tracks within this LogoRecognitionAnnotation are - recognized as. - tracks: - All logo tracks where the recognized logo appears. Each track - corresponds to one logo instance appearing in consecutive - frames. - segments: - All video segments where the recognized logo appears. There - might be multiple instances of the same logo class appearing - in one VideoSegment. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1.LogoRecognitionAnnotation) - }, -) -_sym_db.RegisterMessage(LogoRecognitionAnnotation) - - -DESCRIPTOR._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["features"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["output_uri"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["location_id"]._options = None -_FACEFRAME._options = None -_FACEANNOTATION._options = None -_TIMESTAMPEDOBJECT.fields_by_name["attributes"]._options = None -_TIMESTAMPEDOBJECT.fields_by_name["landmarks"]._options = None -_TRACK.fields_by_name["attributes"]._options = None -_TRACK.fields_by_name["confidence"]._options = None -_VIDEOANNOTATIONRESULTS.fields_by_name["face_annotations"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["language_code"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["max_alternatives"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["filter_profanity"]._options = 
None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["speech_contexts"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "enable_automatic_punctuation" -]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["audio_tracks"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["enable_speaker_diarization"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["diarization_speaker_count"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["enable_word_confidence"]._options = None -_SPEECHCONTEXT.fields_by_name["phrases"]._options = None -_SPEECHTRANSCRIPTION.fields_by_name["language_code"]._options = None -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["confidence"]._options = None -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["words"]._options = None -_WORDINFO.fields_by_name["confidence"]._options = None -_WORDINFO.fields_by_name["speaker_tag"]._options = None - -_VIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="VideoIntelligenceService", - full_name="google.cloud.videointelligence.v1.VideoIntelligenceService", - file=DESCRIPTOR, - index=0, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=8921, - serialized_end=9241, - methods=[ - _descriptor.MethodDescriptor( - name="AnnotateVideo", - full_name="google.cloud.videointelligence.v1.VideoIntelligenceService.AnnotateVideo", - index=0, - containing_service=None, - input_type=_ANNOTATEVIDEOREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=b'\202\323\344\223\002\030"\023/v1/videos:annotate:\001*\332A\022input_uri,features\312A.\n\025AnnotateVideoResponse\022\025AnnotateVideoProgress', - create_key=_descriptor._internal_create_key, - ), - ], -) -_sym_db.RegisterServiceDescriptor(_VIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name["VideoIntelligenceService"] = _VIDEOINTELLIGENCESERVICE - -# 
@@protoc_insertion_point(module_scope) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2_grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2_grpc.py deleted file mode 100644 index faf647ea17d6..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/proto/video_intelligence_pb2_grpc.py +++ /dev/null @@ -1,92 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from google.cloud.videointelligence_v1.proto import ( - video_intelligence_pb2 as google_dot_cloud_dot_videointelligence__v1_dot_proto_dot_video__intelligence__pb2, -) -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) - - -class VideoIntelligenceServiceStub(object): - """Service that implements the Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. - """ - self.AnnotateVideo = channel.unary_unary( - "/google.cloud.videointelligence.v1.VideoIntelligenceService/AnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, - ) - - -class VideoIntelligenceServiceServicer(object): - """Service that implements the Video Intelligence API. - """ - - def AnnotateVideo(self, request, context): - """Performs asynchronous video annotation. Progress and results can be - retrieved through the `google.longrunning.Operations` interface. - `Operation.metadata` contains `AnnotateVideoProgress` (progress). - `Operation.response` contains `AnnotateVideoResponse` (results). 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_VideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "AnnotateVideo": grpc.unary_unary_rpc_method_handler( - servicer.AnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1.VideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class VideoIntelligenceService(object): - """Service that implements the Video Intelligence API. - """ - - @staticmethod - def AnnotateVideo( - request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.unary_unary( - request, - target, - "/google.cloud.videointelligence.v1.VideoIntelligenceService/AnnotateVideo", - google_dot_cloud_dot_videointelligence__v1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - google_dot_longrunning_dot_operations__pb2.Operation.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/py.typed new file mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/py.typed @@ -0,0 +1,2 @@ +# Marker file for 
PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/__init__.py similarity index 71% rename from packages/google-cloud-videointelligence/google/__init__.py rename to packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/__init__.py index 9a1b64a6d586..42ffdf2bc43d 100644 --- a/packages/google-cloud-videointelligence/google/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/__init__.py @@ -1,24 +1,16 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
- -try: - import pkg_resources - - pkg_resources.declare_namespace(__name__) -except ImportError: - import pkgutil - - __path__ = pkgutil.extend_path(__path__, __name__) +# diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/__init__.py similarity index 67% rename from packages/google-cloud-videointelligence/google/cloud/videointelligence.py rename to packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/__init__.py index 289cce4b141f..559320286ca2 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/__init__.py @@ -1,29 +1,24 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+# - -from __future__ import absolute_import - -from google.cloud.videointelligence_v1 import VideoIntelligenceServiceClient -from google.cloud.videointelligence_v1 import enums -from google.cloud.videointelligence_v1 import types - +from .client import VideoIntelligenceServiceClient +from .async_client import VideoIntelligenceServiceAsyncClient __all__ = ( - "enums", - "types", "VideoIntelligenceServiceClient", + "VideoIntelligenceServiceAsyncClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/async_client.py new file mode 100644 index 000000000000..e4e0e5428786 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/async_client.py @@ -0,0 +1,270 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import Dict, Sequence, Tuple, Type, Union +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport +from .client import VideoIntelligenceServiceClient + + +class VideoIntelligenceServiceAsyncClient: + """Service that implements the Video Intelligence API.""" + + _client: VideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + + common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod(VideoIntelligenceServiceClient.common_folder_path) + parse_common_folder_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + VideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + VideoIntelligenceServiceClient.common_project_path + ) + parse_common_project_path = 
staticmethod( + VideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + VideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = VideoIntelligenceServiceClient.from_service_account_file + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(VideoIntelligenceServiceClient).get_transport_class, + type(VideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[str, VideoIntelligenceServiceTransport] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = VideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + async def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation_async.AsyncOperation: + r"""Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. 
URIs must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + must be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/client.py new file mode 100644 index 000000000000..88b3c34c1fcb --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/client.py @@ -0,0 +1,438 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import Callable, Dict, Optional, Sequence, Tuple, Type, Union +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import VideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +class VideoIntelligenceServiceClientMeta(type): + """Metaclass for the VideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport + _transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[VideoIntelligenceServiceTransport]: + """Return an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. 
+ + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class VideoIntelligenceServiceClient(metaclass=VideoIntelligenceServiceClientMeta): + """Service that implements the Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + {@api.name}: The constructed client. 
+ """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P<folder>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse an organization path into its component segments.""" + m = re.match(r"^organizations/(?P<organization>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + + @staticmethod + def parse_common_project_path(path: str) -> 
Dict[str, str]: + """Parse a project path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_location_path(project: str, location: str,) -> str: + """Return a fully-qualified location string.""" + return "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + + @staticmethod + def parse_common_location_path(path: str) -> Dict[str, str]: + """Parse a location path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path) + return m.groupdict() if m else {} + + def __init__( + self, + *, + credentials: Optional[credentials.Credentials] = None, + transport: Union[str, VideoIntelligenceServiceTransport, None] = None, + client_options: Optional[client_options_lib.ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (client_options_lib.ClientOptions): Custom options for the + client. It won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). 
However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, VideoIntelligenceServiceTransport): + # transport is a VideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: + r"""Performs asynchronous video annotation. 
Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. URIs must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + must be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. 
Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # Minor optimization to avoid making a copy if the user passes + # in a video_intelligence.AnnotateVideoRequest. + # There's no risk of modifying the input as we've already verified + # there are no flattened fields. + if not isinstance(request, video_intelligence.AnnotateVideoRequest): + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.annotate_video] + + # Send the request. + response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..08de302230ba --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import VideoIntelligenceServiceTransport +from .grpc import VideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport +_transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "VideoIntelligenceServiceTransport", + "VideoIntelligenceServiceGrpcTransport", + "VideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..c2764892ca07 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/base.py @@ -0,0 +1,141 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import abc +import typing +import pkg_resources + +from google import auth # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.auth import credentials # type: ignore + +from google.cloud.videointelligence_v1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +class VideoIntelligenceServiceTransport(abc.ABC): + """Abstract transport class for VideoIntelligenceService.""" + + AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: typing.Optional[str] = None, + scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES, + quota_project_id: typing.Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes.
+ quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.annotate_video: gapic_v1.method.wrap_method( + self.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=client_info, + ), + } + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.AnnotateVideoRequest], + typing.Union[operations.Operation, typing.Awaitable[operations.Operation]], + ]: + raise NotImplementedError() + + +__all__ = ("VideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..e9f9cedf851c --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class VideoIntelligenceServiceGrpcTransport(VideoIntelligenceServiceTransport): + """gRPC backend transport for VideoIntelligenceService. + + Service that implements the Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): A list of scopes. This argument is + ignored if ``channel`` is provided. + channel (Optional[grpc.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided.
+ credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+ super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[[video_intelligence.AnnotateVideoRequest], operations.Operation]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..956ab4bbed77 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/services/video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google.api_core import operations_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import VideoIntelligenceServiceGrpcTransport + + +class VideoIntelligenceServiceGrpcAsyncIOTransport(VideoIntelligenceServiceTransport): + """gRPC AsyncIO backend transport for VideoIntelligenceService. + + Service that implements the Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service.
If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: aio.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id=None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided.
+ credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[aio.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. 
+ + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsAsyncClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[ + [video_intelligence.AnnotateVideoRequest], Awaitable[operations.Operation] + ]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types.py deleted file mode 100644 index 8a467958e5da..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from __future__ import absolute_import -import sys - -from google.api_core.protobuf_helpers import get_messages - -from google.cloud.videointelligence_v1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - - -_shared_modules = [ - operations_pb2, - any_pb2, - duration_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [ - video_intelligence_pb2, -] - -names = [] - -for module in _shared_modules: # pragma: NO COVER - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.videointelligence_v1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - - -__all__ = tuple(sorted(names)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/__init__.py new file mode 100644 index 000000000000..929f953df7ab --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/__init__.py @@ -0,0 +1,109 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from .video_intelligence import ( + AnnotateVideoRequest, + VideoContext, + LabelDetectionConfig, + ShotChangeDetectionConfig, + ObjectTrackingConfig, + FaceDetectionConfig, + PersonDetectionConfig, + ExplicitContentDetectionConfig, + TextDetectionConfig, + VideoSegment, + LabelSegment, + LabelFrame, + Entity, + LabelAnnotation, + ExplicitContentFrame, + ExplicitContentAnnotation, + NormalizedBoundingBox, + FaceDetectionAnnotation, + PersonDetectionAnnotation, + FaceSegment, + FaceFrame, + FaceAnnotation, + TimestampedObject, + Track, + DetectedAttribute, + DetectedLandmark, + VideoAnnotationResults, + AnnotateVideoResponse, + VideoAnnotationProgress, + AnnotateVideoProgress, + SpeechTranscriptionConfig, + SpeechContext, + SpeechTranscription, + SpeechRecognitionAlternative, + WordInfo, + NormalizedVertex, + NormalizedBoundingPoly, + TextSegment, + TextFrame, + TextAnnotation, + ObjectTrackingFrame, + ObjectTrackingAnnotation, + LogoRecognitionAnnotation, +) + + +__all__ = ( + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ObjectTrackingConfig", + "FaceDetectionConfig", + "PersonDetectionConfig", + "ExplicitContentDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "FaceDetectionAnnotation", + "PersonDetectionAnnotation", + "FaceSegment", + "FaceFrame", + "FaceAnnotation", + "TimestampedObject", + "Track", + "DetectedAttribute", + "DetectedLandmark", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", + 
"LogoRecognitionAnnotation", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/video_intelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/video_intelligence.py new file mode 100644 index 000000000000..0004b15c4286 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1/types/video_intelligence.py @@ -0,0 +1,1344 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import proto # type: ignore + + +from google.protobuf import duration_pb2 as duration # type: ignore +from google.protobuf import timestamp_pb2 as timestamp # type: ignore +from google.rpc import status_pb2 as status # type: ignore + + +__protobuf__ = proto.module( + package="google.cloud.videointelligence.v1", + manifest={ + "Feature", + "LabelDetectionMode", + "Likelihood", + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ObjectTrackingConfig", + "FaceDetectionConfig", + "PersonDetectionConfig", + "ExplicitContentDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "FaceDetectionAnnotation", + "PersonDetectionAnnotation", + "FaceSegment", + "FaceFrame", + "FaceAnnotation", + "TimestampedObject", + "Track", + "DetectedAttribute", + "DetectedLandmark", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", + "LogoRecognitionAnnotation", + }, +) + + +class Feature(proto.Enum): + r"""Video annotation feature.""" + FEATURE_UNSPECIFIED = 0 + LABEL_DETECTION = 1 + SHOT_CHANGE_DETECTION = 2 + EXPLICIT_CONTENT_DETECTION = 3 + FACE_DETECTION = 4 + SPEECH_TRANSCRIPTION = 6 + TEXT_DETECTION = 7 + OBJECT_TRACKING = 9 + LOGO_RECOGNITION = 12 + PERSON_DETECTION = 14 + + +class LabelDetectionMode(proto.Enum): + r"""Label detection mode.""" + LABEL_DETECTION_MODE_UNSPECIFIED = 0 + SHOT_MODE = 1 + FRAME_MODE = 2 + SHOT_AND_FRAME_MODE = 3 + + +class Likelihood(proto.Enum): + r"""Bucketized representation of likelihood.""" + 
LIKELIHOOD_UNSPECIFIED = 0 + VERY_UNLIKELY = 1 + UNLIKELY = 2 + POSSIBLE = 3 + LIKELY = 4 + VERY_LIKELY = 5 + + +class AnnotateVideoRequest(proto.Message): + r"""Video annotation request. + + Attributes: + input_uri (str): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. URIs must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' to + match 0 or more characters; '?' to match 1 character. If + unset, the input video should be embedded in the request as + ``input_content``. If set, ``input_content`` must be unset. + input_content (bytes): + The video data bytes. If unset, the input video(s) should be + specified via the ``input_uri``. If set, ``input_uri`` must + be unset. + features (Sequence[~.video_intelligence.Feature]): + Required. Requested video annotation + features. + video_context (~.video_intelligence.VideoContext): + Additional video context and/or + feature-specific parameters. + output_uri (str): + Optional. Location where the output (in JSON format) should + be stored. Currently, only `Cloud + Storage `__ URIs are + supported. These must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + location_id (str): + Optional. Cloud region where annotation should take place. + Supported cloud regions are: ``us-east1``, ``us-west1``, + ``europe-west1``, ``asia-east1``. If no region is specified, + the region will be determined based on video file location.
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + input_content = proto.Field(proto.BYTES, number=6) + + features = proto.RepeatedField(proto.ENUM, number=2, enum="Feature",) + + video_context = proto.Field(proto.MESSAGE, number=3, message="VideoContext",) + + output_uri = proto.Field(proto.STRING, number=4) + + location_id = proto.Field(proto.STRING, number=5) + + +class VideoContext(proto.Message): + r"""Video context and/or feature-specific parameters. + + Attributes: + segments (Sequence[~.video_intelligence.VideoSegment]): + Video segments to annotate. The segments may + overlap and are not required to be contiguous or + span the whole video. If unspecified, each video + is treated as a single segment. + label_detection_config (~.video_intelligence.LabelDetectionConfig): + Config for LABEL_DETECTION. + shot_change_detection_config (~.video_intelligence.ShotChangeDetectionConfig): + Config for SHOT_CHANGE_DETECTION. + explicit_content_detection_config (~.video_intelligence.ExplicitContentDetectionConfig): + Config for EXPLICIT_CONTENT_DETECTION. + face_detection_config (~.video_intelligence.FaceDetectionConfig): + Config for FACE_DETECTION. + speech_transcription_config (~.video_intelligence.SpeechTranscriptionConfig): + Config for SPEECH_TRANSCRIPTION. + text_detection_config (~.video_intelligence.TextDetectionConfig): + Config for TEXT_DETECTION. + person_detection_config (~.video_intelligence.PersonDetectionConfig): + Config for PERSON_DETECTION. + object_tracking_config (~.video_intelligence.ObjectTrackingConfig): + Config for OBJECT_TRACKING. 
+ """ + + segments = proto.RepeatedField(proto.MESSAGE, number=1, message="VideoSegment",) + + label_detection_config = proto.Field( + proto.MESSAGE, number=2, message="LabelDetectionConfig", + ) + + shot_change_detection_config = proto.Field( + proto.MESSAGE, number=3, message="ShotChangeDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, number=4, message="ExplicitContentDetectionConfig", + ) + + face_detection_config = proto.Field( + proto.MESSAGE, number=5, message="FaceDetectionConfig", + ) + + speech_transcription_config = proto.Field( + proto.MESSAGE, number=6, message="SpeechTranscriptionConfig", + ) + + text_detection_config = proto.Field( + proto.MESSAGE, number=8, message="TextDetectionConfig", + ) + + person_detection_config = proto.Field( + proto.MESSAGE, number=11, message="PersonDetectionConfig", + ) + + object_tracking_config = proto.Field( + proto.MESSAGE, number=13, message="ObjectTrackingConfig", + ) + + +class LabelDetectionConfig(proto.Message): + r"""Config for LABEL_DETECTION. + + Attributes: + label_detection_mode (~.video_intelligence.LabelDetectionMode): + What labels should be detected with LABEL_DETECTION, in + addition to video-level labels or segment-level labels. If + unspecified, defaults to ``SHOT_MODE``. + stationary_camera (bool): + Whether the video has been shot from a stationary (i.e., + non-moving) camera. When set to true, might improve + detection accuracy for moving objects. Should be used with + ``SHOT_AND_FRAME_MODE`` enabled. + model (str): + Model to use for label detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + frame_confidence_threshold (float): + The confidence threshold we perform filtering on the labels + from frame-level detection. If not set, it is set to 0.4 by + default. The valid range for this threshold is [0.1, 0.9]. + Any value set outside of this range will be clipped. 
Note: + For best results, follow the default threshold. We will + update the default threshold every time we release a new + model. + video_confidence_threshold (float): + The confidence threshold we perform filtering on the labels + from video-level and shot-level detections. If not set, it's + set to 0.3 by default. The valid range for this threshold is + [0.1, 0.9]. Any value set outside of this range will be + clipped. Note: For best results, follow the default + threshold. We will update the default threshold every time + we release a new model. + """ + + label_detection_mode = proto.Field(proto.ENUM, number=1, enum="LabelDetectionMode",) + + stationary_camera = proto.Field(proto.BOOL, number=2) + + model = proto.Field(proto.STRING, number=3) + + frame_confidence_threshold = proto.Field(proto.FLOAT, number=4) + + video_confidence_threshold = proto.Field(proto.FLOAT, number=5) + + +class ShotChangeDetectionConfig(proto.Message): + r"""Config for SHOT_CHANGE_DETECTION. + + Attributes: + model (str): + Model to use for shot change detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class ObjectTrackingConfig(proto.Message): + r"""Config for OBJECT_TRACKING. + + Attributes: + model (str): + Model to use for object tracking. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class FaceDetectionConfig(proto.Message): + r"""Config for FACE_DETECTION. + + Attributes: + model (str): + Model to use for face detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + include_bounding_boxes (bool): + Whether bounding boxes are included in the + face annotation output. + include_attributes (bool): + Whether to enable face attributes detection, such as + glasses, dark_glasses, mouth_open etc.
Ignored if + 'include_bounding_boxes' is set to false. + """ + + model = proto.Field(proto.STRING, number=1) + + include_bounding_boxes = proto.Field(proto.BOOL, number=2) + + include_attributes = proto.Field(proto.BOOL, number=5) + + +class PersonDetectionConfig(proto.Message): + r"""Config for PERSON_DETECTION. + + Attributes: + include_bounding_boxes (bool): + Whether bounding boxes are included in the + person detection annotation output. + include_pose_landmarks (bool): + Whether to enable pose landmarks detection. Ignored if + 'include_bounding_boxes' is set to false. + include_attributes (bool): + Whether to enable person attributes detection, such as cloth + color (black, blue, etc), type (coat, dress, etc), pattern + (plain, floral, etc), hair, etc. Ignored if + 'include_bounding_boxes' is set to false. + """ + + include_bounding_boxes = proto.Field(proto.BOOL, number=1) + + include_pose_landmarks = proto.Field(proto.BOOL, number=2) + + include_attributes = proto.Field(proto.BOOL, number=3) + + +class ExplicitContentDetectionConfig(proto.Message): + r"""Config for EXPLICIT_CONTENT_DETECTION. + + Attributes: + model (str): + Model to use for explicit content detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class TextDetectionConfig(proto.Message): + r"""Config for TEXT_DETECTION. + + Attributes: + language_hints (Sequence[str]): + Language hint can be specified if the + language to be detected is known a priori. It + can increase the accuracy of the detection. + Language hint must be language code in BCP-47 + format. + + Automatic language detection is performed if no + hint is provided. + model (str): + Model to use for text detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". 
+ """ + + language_hints = proto.RepeatedField(proto.STRING, number=1) + + model = proto.Field(proto.STRING, number=2) + + +class VideoSegment(proto.Message): + r"""Video segment. + + Attributes: + start_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the start of the segment + (inclusive). + end_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the end of the segment + (inclusive). + """ + + start_time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class LabelSegment(proto.Message): + r"""Video segment level annotation results for label detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a label was detected. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class LabelFrame(proto.Message): + r"""Video frame level annotation results for label detection. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class Entity(proto.Message): + r"""Detected entity from video analysis. + + Attributes: + entity_id (str): + Opaque entity ID. Some IDs may be available in `Google + Knowledge Graph Search + API `__. + description (str): + Textual description, e.g., ``Fixed-gear bicycle``. + language_code (str): + Language code for ``description`` in BCP-47 format. 
+ """ + + entity_id = proto.Field(proto.STRING, number=1) + + description = proto.Field(proto.STRING, number=2) + + language_code = proto.Field(proto.STRING, number=3) + + +class LabelAnnotation(proto.Message): + r"""Label annotation. + + Attributes: + entity (~.video_intelligence.Entity): + Detected entity. + category_entities (Sequence[~.video_intelligence.Entity]): + Common categories for the detected entity. For example, when + the label is ``Terrier``, the category is likely ``dog``. + And in some cases there might be more than one categories + e.g., ``Terrier`` could also be a ``pet``. + segments (Sequence[~.video_intelligence.LabelSegment]): + All video segments where a label was + detected. + frames (Sequence[~.video_intelligence.LabelFrame]): + All video frames where a label was detected. + version (str): + Feature version. + """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + category_entities = proto.RepeatedField(proto.MESSAGE, number=2, message="Entity",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="LabelSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=4, message="LabelFrame",) + + version = proto.Field(proto.STRING, number=5) + + +class ExplicitContentFrame(proto.Message): + r"""Video frame level annotation results for explicit content. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + pornography_likelihood (~.video_intelligence.Likelihood): + Likelihood of the pornography content.. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + pornography_likelihood = proto.Field(proto.ENUM, number=2, enum="Likelihood",) + + +class ExplicitContentAnnotation(proto.Message): + r"""Explicit content annotation (based on per-frame visual + signals only). 
If no explicit content has been detected in a + frame, no annotations are present for that frame. + + Attributes: + frames (Sequence[~.video_intelligence.ExplicitContentFrame]): + All video frames where explicit content was + detected. + version (str): + Feature version. + """ + + frames = proto.RepeatedField( + proto.MESSAGE, number=1, message="ExplicitContentFrame", + ) + + version = proto.Field(proto.STRING, number=2) + + +class NormalizedBoundingBox(proto.Message): + r"""Normalized bounding box. The normalized vertex coordinates are + relative to the original image. Range: [0, 1]. + + Attributes: + left (float): + Left X coordinate. + top (float): + Top Y coordinate. + right (float): + Right X coordinate. + bottom (float): + Bottom Y coordinate. + """ + + left = proto.Field(proto.FLOAT, number=1) + + top = proto.Field(proto.FLOAT, number=2) + + right = proto.Field(proto.FLOAT, number=3) + + bottom = proto.Field(proto.FLOAT, number=4) + + +class FaceDetectionAnnotation(proto.Message): + r"""Face detection annotation. + + Attributes: + version (str): + Feature version. + """ + + version = proto.Field(proto.STRING, number=5) + + +class PersonDetectionAnnotation(proto.Message): + r"""Person detection annotation per video. + + Attributes: + tracks (Sequence[~.video_intelligence.Track]): + The detected tracks of a person. + version (str): + Feature version. + """ + + tracks = proto.RepeatedField(proto.MESSAGE, number=1, message="Track",) + + version = proto.Field(proto.STRING, number=2) + + +class FaceSegment(proto.Message): + r"""Video segment level annotation results for face detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a face was detected. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + +class FaceFrame(proto.Message): + r"""Deprecated. No effect. 
+ + Attributes: + normalized_bounding_boxes (Sequence[~.video_intelligence.NormalizedBoundingBox]): + Normalized Bounding boxes in a frame. + There can be more than one box if the same + face is detected in multiple locations within + the current frame. + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + """ + + normalized_bounding_boxes = proto.RepeatedField( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class FaceAnnotation(proto.Message): + r"""Deprecated. No effect. + + Attributes: + thumbnail (bytes): + Thumbnail of a representative face view (in + JPEG format). + segments (Sequence[~.video_intelligence.FaceSegment]): + All video segments where a face was detected. + frames (Sequence[~.video_intelligence.FaceFrame]): + All video frames where a face was detected. + """ + + thumbnail = proto.Field(proto.BYTES, number=1) + + segments = proto.RepeatedField(proto.MESSAGE, number=2, message="FaceSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=3, message="FaceFrame",) + + +class TimestampedObject(proto.Message): + r"""For tracking related features. An object at time_offset with + attributes, and located with normalized_bounding_box. + + Attributes: + normalized_bounding_box (~.video_intelligence.NormalizedBoundingBox): + Normalized Bounding box in a frame, where the + object is located. + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + object. + attributes (Sequence[~.video_intelligence.DetectedAttribute]): + Optional. The attributes of the object in the + bounding box. + landmarks (Sequence[~.video_intelligence.DetectedLandmark]): + Optional. The detected landmarks.
+ """ + + normalized_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + attributes = proto.RepeatedField( + proto.MESSAGE, number=3, message="DetectedAttribute", + ) + + landmarks = proto.RepeatedField( + proto.MESSAGE, number=4, message="DetectedLandmark", + ) + + +class Track(proto.Message): + r"""A track of an object instance. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment of a track. + timestamped_objects (Sequence[~.video_intelligence.TimestampedObject]): + The object with timestamp and attributes per + frame in the track. + attributes (Sequence[~.video_intelligence.DetectedAttribute]): + Optional. Attributes in the track level. + confidence (float): + Optional. The confidence score of the tracked + object. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + timestamped_objects = proto.RepeatedField( + proto.MESSAGE, number=2, message="TimestampedObject", + ) + + attributes = proto.RepeatedField( + proto.MESSAGE, number=3, message="DetectedAttribute", + ) + + confidence = proto.Field(proto.FLOAT, number=4) + + +class DetectedAttribute(proto.Message): + r"""A generic detected attribute represented by name in string + format. + + Attributes: + name (str): + The name of the attribute, for example, glasses, + dark_glasses, mouth_open. A full list of supported type + names will be provided in the document. + confidence (float): + Detected attribute confidence. Range [0, 1]. + value (str): + Text value of the detection result. For + example, the value for "HairColor" can be + "black", "blonde", etc. 
+ """ + + name = proto.Field(proto.STRING, number=1) + + confidence = proto.Field(proto.FLOAT, number=2) + + value = proto.Field(proto.STRING, number=3) + + +class DetectedLandmark(proto.Message): + r"""A generic detected landmark represented by name in string + format and a 2D location. + + Attributes: + name (str): + The name of this landmark, for example, left_hand, + right_shoulder. + point (~.video_intelligence.NormalizedVertex): + The 2D point of the detected landmark using + the normalized image coordindate system. The + normalized coordinates have the range from 0 to + 1. + confidence (float): + The confidence score of the detected landmark. Range [0, 1]. + """ + + name = proto.Field(proto.STRING, number=1) + + point = proto.Field(proto.MESSAGE, number=2, message="NormalizedVertex",) + + confidence = proto.Field(proto.FLOAT, number=3) + + +class VideoAnnotationResults(proto.Message): + r"""Annotation results for a single video. + + Attributes: + input_uri (str): + Video file location in `Cloud + Storage `__. + segment (~.video_intelligence.VideoSegment): + Video segment on which the annotation is run. + segment_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Topical label annotations on video level or + user-specified segment level. There is exactly + one element for each unique label. + segment_presence_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Presence label annotations on video level or user-specified + segment level. There is exactly one element for each unique + label. Compared to the existing topical + ``segment_label_annotations``, this field presents more + fine-grained, segment-level labels detected in video content + and is made available only when the client sets + ``LabelDetectionConfig.model`` to "builtin/latest" in the + request. + shot_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Topical label annotations on shot level. 
+ There is exactly one element for each unique + label. + shot_presence_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Presence label annotations on shot level. There is exactly + one element for each unique label. Compared to the existing + topical ``shot_label_annotations``, this field presents more + fine-grained, shot-level labels detected in video content + and is made available only when the client sets + ``LabelDetectionConfig.model`` to "builtin/latest" in the + request. + frame_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on frame level. + There is exactly one element for each unique + label. + face_annotations (Sequence[~.video_intelligence.FaceAnnotation]): + Deprecated. Please use ``face_detection_annotations`` + instead. + face_detection_annotations (Sequence[~.video_intelligence.FaceDetectionAnnotation]): + Face detection annotations. + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotations. Each shot is represented as + a video segment. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation. + speech_transcriptions (Sequence[~.video_intelligence.SpeechTranscription]): + Speech transcription. + text_annotations (Sequence[~.video_intelligence.TextAnnotation]): + OCR text detection and tracking. + Annotations for list of detected text snippets. + Each will have list of frame information + associated with it. + object_annotations (Sequence[~.video_intelligence.ObjectTrackingAnnotation]): + Annotations for list of objects detected and + tracked in video. + logo_recognition_annotations (Sequence[~.video_intelligence.LogoRecognitionAnnotation]): + Annotations for list of logos detected, + tracked and recognized in video. + person_detection_annotations (Sequence[~.video_intelligence.PersonDetectionAnnotation]): + Person detection annotations. + error (~.status.Status): + If set, indicates an error. 
Note that for a single + ``AnnotateVideoRequest`` some videos may succeed and some + may fail. + """ + + input_uri = proto.Field(proto.STRING, number=1) + + segment = proto.Field(proto.MESSAGE, number=10, message="VideoSegment",) + + segment_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + segment_presence_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=23, message="LabelAnnotation", + ) + + shot_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=3, message="LabelAnnotation", + ) + + shot_presence_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=24, message="LabelAnnotation", + ) + + frame_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="LabelAnnotation", + ) + + face_annotations = proto.RepeatedField( + proto.MESSAGE, number=5, message="FaceAnnotation", + ) + + face_detection_annotations = proto.RepeatedField( + proto.MESSAGE, number=13, message="FaceDetectionAnnotation", + ) + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=6, message="VideoSegment", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=7, message="ExplicitContentAnnotation", + ) + + speech_transcriptions = proto.RepeatedField( + proto.MESSAGE, number=11, message="SpeechTranscription", + ) + + text_annotations = proto.RepeatedField( + proto.MESSAGE, number=12, message="TextAnnotation", + ) + + object_annotations = proto.RepeatedField( + proto.MESSAGE, number=14, message="ObjectTrackingAnnotation", + ) + + logo_recognition_annotations = proto.RepeatedField( + proto.MESSAGE, number=19, message="LogoRecognitionAnnotation", + ) + + person_detection_annotations = proto.RepeatedField( + proto.MESSAGE, number=20, message="PersonDetectionAnnotation", + ) + + error = proto.Field(proto.MESSAGE, number=9, message=status.Status,) + + +class AnnotateVideoResponse(proto.Message): + r"""Video annotation response. 
Included in the ``response`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_results (Sequence[~.video_intelligence.VideoAnnotationResults]): + Annotation results for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_results = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationResults", + ) + + +class VideoAnnotationProgress(proto.Message): + r"""Annotation progress for a single video. + + Attributes: + input_uri (str): + Video file location in `Cloud + Storage `__. + progress_percent (int): + Approximate percentage processed thus far. + Guaranteed to be 100 when fully processed. + start_time (~.timestamp.Timestamp): + Time when the request was received. + update_time (~.timestamp.Timestamp): + Time of the most recent update. + feature (~.video_intelligence.Feature): + Specifies which feature is being tracked if + the request contains more than one feature. + segment (~.video_intelligence.VideoSegment): + Specifies which segment is being tracked if + the request contains more than one segment. + """ + + input_uri = proto.Field(proto.STRING, number=1) + + progress_percent = proto.Field(proto.INT32, number=2) + + start_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,) + + update_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,) + + feature = proto.Field(proto.ENUM, number=5, enum="Feature",) + + segment = proto.Field(proto.MESSAGE, number=6, message="VideoSegment",) + + +class AnnotateVideoProgress(proto.Message): + r"""Video annotation progress. Included in the ``metadata`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_progress (Sequence[~.video_intelligence.VideoAnnotationProgress]): + Progress metadata for all videos specified in + ``AnnotateVideoRequest``. 
+ """ + + annotation_progress = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationProgress", + ) + + +class SpeechTranscriptionConfig(proto.Message): + r"""Config for SPEECH_TRANSCRIPTION. + + Attributes: + language_code (str): + Required. *Required* The language of the supplied audio as a + `BCP-47 `__ + language tag. Example: "en-US". See `Language + Support `__ + for a list of the currently supported language codes. + max_alternatives (int): + Optional. Maximum number of recognition hypotheses to be + returned. Specifically, the maximum number of + ``SpeechRecognitionAlternative`` messages within each + ``SpeechTranscription``. The server may return fewer than + ``max_alternatives``. Valid values are ``0``-``30``. A value + of ``0`` or ``1`` will return a maximum of one. If omitted, + will return a maximum of one. + filter_profanity (bool): + Optional. If set to ``true``, the server will attempt to + filter out profanities, replacing all but the initial + character in each filtered word with asterisks, e.g. "f***". + If set to ``false`` or omitted, profanities won't be + filtered out. + speech_contexts (Sequence[~.video_intelligence.SpeechContext]): + Optional. A means to provide context to + assist the speech recognition. + enable_automatic_punctuation (bool): + Optional. If 'true', adds punctuation to + recognition result hypotheses. This feature is + only available in select languages. Setting this + for requests in other languages has no effect at + all. The default 'false' value does not add + punctuation to result hypotheses. NOTE: "This is + currently offered as an experimental service, + complimentary to all users. In the future this + may be exclusively available as a premium + feature.". + audio_tracks (Sequence[int]): + Optional. For file formats, such as MXF or + MKV, supporting multiple audio tracks, specify + up to two tracks. Default: track 0. + enable_speaker_diarization (bool): + Optional. 
If 'true', enables speaker detection for each + recognized word in the top alternative of the recognition + result using a speaker_tag provided in the WordInfo. Note: + When this is true, we send all the words from the beginning + of the audio for the top alternative in every consecutive + response. This is done in order to improve our speaker tags + as our models learn to identify the speakers in the + conversation over time. + diarization_speaker_count (int): + Optional. If set, specifies the estimated number of speakers + in the conversation. If not set, defaults to '2'. Ignored + unless enable_speaker_diarization is set to true. + enable_word_confidence (bool): + Optional. If ``true``, the top result includes a list of + words and the confidence for those words. If ``false``, no + word-level confidence information is returned. The default + is ``false``. + """ + + language_code = proto.Field(proto.STRING, number=1) + + max_alternatives = proto.Field(proto.INT32, number=2) + + filter_profanity = proto.Field(proto.BOOL, number=3) + + speech_contexts = proto.RepeatedField( + proto.MESSAGE, number=4, message="SpeechContext", + ) + + enable_automatic_punctuation = proto.Field(proto.BOOL, number=5) + + audio_tracks = proto.RepeatedField(proto.INT32, number=6) + + enable_speaker_diarization = proto.Field(proto.BOOL, number=7) + + diarization_speaker_count = proto.Field(proto.INT32, number=8) + + enable_word_confidence = proto.Field(proto.BOOL, number=9) + + +class SpeechContext(proto.Message): + r"""Provides "hints" to the speech recognizer to favor specific + words and phrases in the results. + + Attributes: + phrases (Sequence[str]): + Optional. A list of strings containing words and phrases + "hints" so that the speech recognition is more likely to + recognize them. This can be used to improve the accuracy for + specific words and phrases, for example, if specific + commands are typically spoken by the user. 
This can also be + used to add additional words to the vocabulary of the + recognizer. See `usage + limits `__. + """ + + phrases = proto.RepeatedField(proto.STRING, number=1) + + +class SpeechTranscription(proto.Message): + r"""A speech recognition result corresponding to a portion of the + audio. + + Attributes: + alternatives (Sequence[~.video_intelligence.SpeechRecognitionAlternative]): + May contain one or more recognition hypotheses (up to the + maximum specified in ``max_alternatives``). These + alternatives are ordered in terms of accuracy, with the top + (first) alternative being the most probable, as ranked by + the recognizer. + language_code (str): + Output only. The + `BCP-47 `__ + language tag of the language in this result. This language + code was detected to have the most likelihood of being + spoken in the audio. + """ + + alternatives = proto.RepeatedField( + proto.MESSAGE, number=1, message="SpeechRecognitionAlternative", + ) + + language_code = proto.Field(proto.STRING, number=2) + + +class SpeechRecognitionAlternative(proto.Message): + r"""Alternative hypotheses (a.k.a. n-best list). + + Attributes: + transcript (str): + Transcript text representing the words that + the user spoke. + confidence (float): + Output only. The confidence estimate between 0.0 and 1.0. A + higher number indicates an estimated greater likelihood that + the recognized words are correct. This field is set only for + the top alternative. This field is not guaranteed to be + accurate and users should not rely on it to be always + provided. The default of 0.0 is a sentinel value indicating + ``confidence`` was not set. + words (Sequence[~.video_intelligence.WordInfo]): + Output only. A list of word-specific information for each + recognized word. Note: When ``enable_speaker_diarization`` + is set to true, you will see all the words from the + beginning of the audio. 
+ """ + + transcript = proto.Field(proto.STRING, number=1) + + confidence = proto.Field(proto.FLOAT, number=2) + + words = proto.RepeatedField(proto.MESSAGE, number=3, message="WordInfo",) + + +class WordInfo(proto.Message): + r"""Word-specific information for recognized words. Word information is + only included in the response when certain request parameters are + set, such as ``enable_word_time_offsets``. + + Attributes: + start_time (~.duration.Duration): + Time offset relative to the beginning of the audio, and + corresponding to the start of the spoken word. This field is + only set if ``enable_word_time_offsets=true`` and only in + the top hypothesis. This is an experimental feature and the + accuracy of the time offset can vary. + end_time (~.duration.Duration): + Time offset relative to the beginning of the audio, and + corresponding to the end of the spoken word. This field is + only set if ``enable_word_time_offsets=true`` and only in + the top hypothesis. This is an experimental feature and the + accuracy of the time offset can vary. + word (str): + The word corresponding to this set of + information. + confidence (float): + Output only. The confidence estimate between 0.0 and 1.0. A + higher number indicates an estimated greater likelihood that + the recognized words are correct. This field is set only for + the top alternative. This field is not guaranteed to be + accurate and users should not rely on it to be always + provided. The default of 0.0 is a sentinel value indicating + ``confidence`` was not set. + speaker_tag (int): + Output only. A distinct integer value is assigned for every + speaker within the audio. This field specifies which one of + those speakers was detected to have spoken this word. Value + ranges from 1 up to diarization_speaker_count, and is only + set if speaker diarization is enabled. 
+    """
+
+    start_time = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,)
+
+    end_time = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,)
+
+    word = proto.Field(proto.STRING, number=3)
+
+    confidence = proto.Field(proto.FLOAT, number=4)
+
+    speaker_tag = proto.Field(proto.INT32, number=5)
+
+
+class NormalizedVertex(proto.Message):
+    r"""A vertex represents a 2D point in the image.
+    NOTE: the normalized vertex coordinates are relative to the
+    original image and range from 0 to 1.
+
+    Attributes:
+        x (float):
+            X coordinate.
+        y (float):
+            Y coordinate.
+    """
+
+    x = proto.Field(proto.FLOAT, number=1)
+
+    y = proto.Field(proto.FLOAT, number=2)
+
+
+class NormalizedBoundingPoly(proto.Message):
+    r"""Normalized bounding polygon for text (that might not be aligned with
+    axis). Contains a list of the corner points in clockwise order
+    starting from top-left corner. For example, for a rectangular
+    bounding box: When the text is horizontal it might look like: 0----1
+    \| \| 3----2
+
+    When it's clockwise rotated 180 degrees around the top-left corner
+    it becomes: 2----3 \| \| 1----0
+
+    and the vertex order will still be (0, 1, 2, 3). Note that values
+    can be less than 0, or greater than 1 due to trigonometric
+    calculations for location of the box.
+
+    Attributes:
+        vertices (Sequence[~.video_intelligence.NormalizedVertex]):
+            Normalized vertices of the bounding polygon.
+    """
+
+    vertices = proto.RepeatedField(proto.MESSAGE, number=1, message="NormalizedVertex",)
+
+
+class TextSegment(proto.Message):
+    r"""Video segment level annotation results for text detection.
+
+    Attributes:
+        segment (~.video_intelligence.VideoSegment):
+            Video segment where a text snippet was
+            detected.
+        confidence (float):
+            Confidence for the track of detected text. It
+            is calculated as the highest over all frames
+            where OCR detected text appears.
+ frames (Sequence[~.video_intelligence.TextFrame]): + Information related to the frames where OCR + detected text appears. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + frames = proto.RepeatedField(proto.MESSAGE, number=3, message="TextFrame",) + + +class TextFrame(proto.Message): + r"""Video frame level annotation results for text annotation + (OCR). Contains information regarding timestamp and bounding box + locations for the frames containing detected OCR text snippets. + + Attributes: + rotated_bounding_box (~.video_intelligence.NormalizedBoundingPoly): + Bounding polygon of the detected text for + this frame. + time_offset (~.duration.Duration): + Timestamp of this frame. + """ + + rotated_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingPoly", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class TextAnnotation(proto.Message): + r"""Annotations related to one detected OCR text snippet. This + will contain the corresponding text, confidence value, and frame + level information for each detection. + + Attributes: + text (str): + The detected text. + segments (Sequence[~.video_intelligence.TextSegment]): + All video segments where OCR detected text + appears. + version (str): + Feature version. + """ + + text = proto.Field(proto.STRING, number=1) + + segments = proto.RepeatedField(proto.MESSAGE, number=2, message="TextSegment",) + + version = proto.Field(proto.STRING, number=3) + + +class ObjectTrackingFrame(proto.Message): + r"""Video frame level annotations for object detection and + tracking. This field stores per frame location, time offset, and + confidence. + + Attributes: + normalized_bounding_box (~.video_intelligence.NormalizedBoundingBox): + The normalized bounding box location of this + object track for the frame. 
+ time_offset (~.duration.Duration): + The timestamp of the frame in microseconds. + """ + + normalized_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class ObjectTrackingAnnotation(proto.Message): + r"""Annotations corresponding to one tracked object. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Non-streaming batch mode ONLY. + Each object track corresponds to one video + segment where it appears. + track_id (int): + Streaming mode ONLY. In streaming mode, we do not know the + end time of a tracked object before it is completed. Hence, + there is no VideoSegment info returned. Instead, we provide + a unique identifiable integer track_id so that the customers + can correlate the results of the ongoing + ObjectTrackAnnotation of the same track_id over time. + entity (~.video_intelligence.Entity): + Entity to specify the object category that + this track is labeled as. + confidence (float): + Object category's labeling confidence of this + track. + frames (Sequence[~.video_intelligence.ObjectTrackingFrame]): + Information corresponding to all frames where + this object track appears. Non-streaming batch + mode: it may be one or multiple + ObjectTrackingFrame messages in frames. + Streaming mode: it can only be one + ObjectTrackingFrame message in frames. + version (str): + Feature version. 
+ """ + + segment = proto.Field( + proto.MESSAGE, number=3, oneof="track_info", message="VideoSegment", + ) + + track_id = proto.Field(proto.INT64, number=5, oneof="track_info") + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + confidence = proto.Field(proto.FLOAT, number=4) + + frames = proto.RepeatedField( + proto.MESSAGE, number=2, message="ObjectTrackingFrame", + ) + + version = proto.Field(proto.STRING, number=6) + + +class LogoRecognitionAnnotation(proto.Message): + r"""Annotation corresponding to one detected, tracked and + recognized logo class. + + Attributes: + entity (~.video_intelligence.Entity): + Entity category information to specify the + logo class that all the logo tracks within this + LogoRecognitionAnnotation are recognized as. + tracks (Sequence[~.video_intelligence.Track]): + All logo tracks where the recognized logo + appears. Each track corresponds to one logo + instance appearing in consecutive frames. + segments (Sequence[~.video_intelligence.VideoSegment]): + All video segments where the recognized logo + appears. There might be multiple instances of + the same logo class appearing in one + VideoSegment. 
+ """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + tracks = proto.RepeatedField(proto.MESSAGE, number=2, message="Track",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="VideoSegment",) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/__init__.py index a8fbff3a8649..af876851715a 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/__init__.py @@ -1,49 +1,71 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +# - -from __future__ import absolute_import -import sys -import warnings - -from google.cloud.videointelligence_v1beta2 import types -from google.cloud.videointelligence_v1beta2.gapic import enums -from google.cloud.videointelligence_v1beta2.gapic import ( - video_intelligence_service_client, -) - - -if sys.version_info[:2] == (2, 7): - message = ( - "A future version of this library will drop support for Python 2.7. 
" - "More details about Python 2 support for Google Cloud Client Libraries " - "can be found at https://cloud.google.com/python/docs/python2-sunset/" - ) - warnings.warn(message, DeprecationWarning) - - -class VideoIntelligenceServiceClient( - video_intelligence_service_client.VideoIntelligenceServiceClient -): - __doc__ = video_intelligence_service_client.VideoIntelligenceServiceClient.__doc__ - enums = enums +from .services.video_intelligence_service import VideoIntelligenceServiceClient +from .types.video_intelligence import AnnotateVideoProgress +from .types.video_intelligence import AnnotateVideoRequest +from .types.video_intelligence import AnnotateVideoResponse +from .types.video_intelligence import Entity +from .types.video_intelligence import ExplicitContentAnnotation +from .types.video_intelligence import ExplicitContentDetectionConfig +from .types.video_intelligence import ExplicitContentFrame +from .types.video_intelligence import FaceAnnotation +from .types.video_intelligence import FaceDetectionConfig +from .types.video_intelligence import FaceFrame +from .types.video_intelligence import FaceSegment +from .types.video_intelligence import Feature +from .types.video_intelligence import LabelAnnotation +from .types.video_intelligence import LabelDetectionConfig +from .types.video_intelligence import LabelDetectionMode +from .types.video_intelligence import LabelFrame +from .types.video_intelligence import LabelSegment +from .types.video_intelligence import Likelihood +from .types.video_intelligence import NormalizedBoundingBox +from .types.video_intelligence import ShotChangeDetectionConfig +from .types.video_intelligence import VideoAnnotationProgress +from .types.video_intelligence import VideoAnnotationResults +from .types.video_intelligence import VideoContext +from .types.video_intelligence import VideoSegment __all__ = ( - "enums", - "types", + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "Entity", + 
"ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "FaceAnnotation", + "FaceDetectionConfig", + "FaceFrame", + "FaceSegment", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "NormalizedBoundingBox", + "ShotChangeDetectionConfig", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoSegment", "VideoIntelligenceServiceClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/enums.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/enums.py deleted file mode 100644 index 3f9e01cfbc61..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/enums.py +++ /dev/null @@ -1,76 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Feature(enum.IntEnum): - """ - Video annotation feature. - - Attributes: - FEATURE_UNSPECIFIED (int): Unspecified. - LABEL_DETECTION (int): Label detection. 
Detect objects, such as dog or flower. - SHOT_CHANGE_DETECTION (int): Shot change detection. - EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - FACE_DETECTION (int): Human face detection and tracking. - """ - - FEATURE_UNSPECIFIED = 0 - LABEL_DETECTION = 1 - SHOT_CHANGE_DETECTION = 2 - EXPLICIT_CONTENT_DETECTION = 3 - FACE_DETECTION = 4 - - -class LabelDetectionMode(enum.IntEnum): - """ - Label detection mode. - - Attributes: - LABEL_DETECTION_MODE_UNSPECIFIED (int): Unspecified. - SHOT_MODE (int): Detect shot-level labels. - FRAME_MODE (int): Detect frame-level labels. - SHOT_AND_FRAME_MODE (int): Detect both shot-level and frame-level labels. - """ - - LABEL_DETECTION_MODE_UNSPECIFIED = 0 - SHOT_MODE = 1 - FRAME_MODE = 2 - SHOT_AND_FRAME_MODE = 3 - - -class Likelihood(enum.IntEnum): - """ - Bucketized representation of likelihood. - - Attributes: - LIKELIHOOD_UNSPECIFIED (int): Unspecified likelihood. - VERY_UNLIKELY (int): Very unlikely. - UNLIKELY (int): Unlikely. - POSSIBLE (int): Possible. - LIKELY (int): Likely. - VERY_LIKELY (int): Very likely. 
- """ - - LIKELIHOOD_UNSPECIFIED = 0 - VERY_UNLIKELY = 1 - UNLIKELY = 2 - POSSIBLE = 3 - LIKELY = 4 - VERY_LIKELY = 5 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/video_intelligence_service_grpc_transport.py deleted file mode 100644 index 71b4ab66182b..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/transports/video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.videointelligence_v1beta2.proto import video_intelligence_pb2_grpc - - -class VideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1beta2 VideoIntelligenceService API. 
- - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "video_intelligence_service_stub": video_intelligence_pb2_grpc.VideoIntelligenceServiceStub( - channel - ), - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. 
- self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def annotate_video(self): - """Return the gRPC stub for :meth:`VideoIntelligenceServiceClient.annotate_video`. - - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["video_intelligence_service_stub"].AnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client.py deleted file mode 100644 index a9ab586910ba..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client.py +++ /dev/null @@ -1,307 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.cloud.videointelligence.v1beta2 VideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import grpc - -from google.cloud.videointelligence_v1beta2.gapic import enums -from google.cloud.videointelligence_v1beta2.gapic import ( - video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1beta2.gapic.transports import ( - video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1beta2.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1beta2.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class VideoIntelligenceServiceClient(object): - """Service that implements Google Cloud Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = "google.cloud.videointelligence.v1beta2.VideoIntelligenceService" - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - VideoIntelligenceServiceClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.VideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.VideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. 
API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) 
- self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def annotate_video( - self, - features, - input_uri=None, - input_content=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Example: - >>> from google.cloud import videointelligence_v1beta2 - >>> from google.cloud.videointelligence_v1beta2 import enums - >>> - >>> client = videointelligence_v1beta2.VideoIntelligenceServiceClient() - >>> - >>> features_element = enums.Feature.LABEL_DETECTION - >>> features = [features_element] - >>> input_uri = 'gs://cloud-samples-data/video/cat.mp4' - >>> - >>> response = client.annotate_video(features, input_uri=input_uri) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - features (list[~google.cloud.videointelligence_v1beta2.types.Feature]): Required. Requested video annotation features. - input_uri (str): Input video location. Currently, only `Google Cloud - Storage `__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). 
For - more information, see `Request - URIs `__. A - video URI may include wildcards in ``object-id``, and thus identify - multiple videos. Supported wildcards: '*' to match 0 or more characters; - '?' to match 1 character. If unset, the input video should be embedded - in the request as ``input_content``. If set, ``input_content`` should be - unset. - input_content (bytes): The video data bytes. If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be unset. - video_context (Union[dict, ~google.cloud.videointelligence_v1beta2.types.VideoContext]): Additional video context and/or feature-specific parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.videointelligence_v1beta2.types.VideoContext` - output_uri (str): Optional. Location where the output (in JSON format) should be - stored. Currently, only `Google Cloud - Storage `__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs `__. - location_id (str): Optional. Cloud region where annotation should take place. Supported - cloud regions: ``us-east1``, ``us-west1``, ``europe-west1``, - ``asia-east1``. If no region is specified, a region will be determined - based on video file location. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.videointelligence_v1beta2.types._OperationFuture` instance. 
- - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.annotate_video, - default_retry=self._method_configs["AnnotateVideo"].retry, - default_timeout=self._method_configs["AnnotateVideo"].timeout, - client_info=self._client_info, - ) - - request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, - input_uri=input_uri, - input_content=input_content, - video_context=video_context, - output_uri=output_uri, - location_id=location_id, - ) - operation = self._inner_api_calls["annotate_video"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - video_intelligence_pb2.AnnotateVideoResponse, - metadata_type=video_intelligence_pb2.AnnotateVideoProgress, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client_config.py deleted file mode 100644 index 01a2f2ac9085..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/gapic/video_intelligence_service_client_config.py +++ /dev/null @@ -1,37 +0,0 @@ -config = { - "interfaces": { - "google.cloud.videointelligence.v1beta2.VideoIntelligenceService": { - "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - }, - "retry_params": { - "retry_policy_1_params": { - 
"initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - "retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "AnnotateVideo": { - "timeout_millis": 600000, - "retry_codes_name": "retry_policy_1_codes", - "retry_params_name": "retry_policy_1_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto deleted file mode 100644 index 690099751da9..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto +++ /dev/null @@ -1,408 +0,0 @@ -// Copyright 2019 Google LLC. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
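The retry settings deleted above describe classic exponential backoff: a 1000 ms initial delay, a 2.5x multiplier, and a 120 000 ms ceiling. A minimal sketch of the delay sequence those parameters imply (real gapic retries also apply jitter and a total-timeout budget, both omitted here):

```python
def backoff_delays(initial_ms=1000, multiplier=2.5, max_ms=120000, attempts=8):
    """Yield the successive retry delays implied by retry_policy_1_params."""
    delay = float(initial_ms)
    for _ in range(attempts):
        # Each attempt waits the current delay, capped at max_retry_delay_millis.
        yield min(delay, max_ms)
        delay *= multiplier

# First delay 1 s, then 2.5x growth, capped at 120 s.
delays = list(backoff_delays())
```

With the defaults above, the sequence grows geometrically (1000, 2500, 6250, ...) until it saturates at the 120 s cap.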
-// - -syntax = "proto3"; - -package google.cloud.videointelligence.v1beta2; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; - -option csharp_namespace = "Google.Cloud.VideoIntelligence.V1Beta2"; -option go_package = "google.golang.org/genproto/googleapis/cloud/videointelligence/v1beta2;videointelligence"; -option java_multiple_files = true; -option java_outer_classname = "VideoIntelligenceServiceProto"; -option java_package = "com.google.cloud.videointelligence.v1beta2"; -option php_namespace = "Google\\Cloud\\VideoIntelligence\\V1beta2"; -option ruby_package = "Google::Cloud::VideoIntelligence::V1beta2"; - -// Service that implements Google Cloud Video Intelligence API. -service VideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform"; - - // Performs asynchronous video annotation. Progress and results can be - // retrieved through the `google.longrunning.Operations` interface. - // `Operation.metadata` contains `AnnotateVideoProgress` (progress). - // `Operation.response` contains `AnnotateVideoResponse` (results). - rpc AnnotateVideo(AnnotateVideoRequest) - returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1beta2/videos:annotate" - body: "*" - }; - option (google.api.method_signature) = "input_uri,features"; - option (google.longrunning.operation_info) = { - response_type: "AnnotateVideoResponse" - metadata_type: "AnnotateVideoProgress" - }; - } -} - -// Video annotation request. -message AnnotateVideoRequest { - // Input video location. 
Currently, only - // [Google Cloud Storage](https://cloud.google.com/storage/) URIs are - // supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For - // more information, see [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). A video - // URI may include wildcards in `object-id`, and thus identify multiple - // videos. Supported wildcards: '*' to match 0 or more characters; - // '?' to match 1 character. If unset, the input video should be embedded - // in the request as `input_content`. If set, `input_content` should be unset. - string input_uri = 1; - - // The video data bytes. - // If unset, the input video(s) should be specified via `input_uri`. - // If set, `input_uri` should be unset. - bytes input_content = 6; - - // Required. Requested video annotation features. - repeated Feature features = 2 [(google.api.field_behavior) = REQUIRED]; - - // Additional video context and/or feature-specific parameters. - VideoContext video_context = 3; - - // Optional. Location where the output (in JSON format) should be stored. - // Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) - // URIs are supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For - // more information, see [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). - string output_uri = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. Cloud region where annotation should take place. Supported cloud - // regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region - // is specified, a region will be determined based on video file location. 
- string location_id = 5 [(google.api.field_behavior) = OPTIONAL]; -} - -// Video context and/or feature-specific parameters. -message VideoContext { - // Video segments to annotate. The segments may overlap and are not required - // to be contiguous or span the whole video. If unspecified, each video is - // treated as a single segment. - repeated VideoSegment segments = 1; - - // Config for LABEL_DETECTION. - LabelDetectionConfig label_detection_config = 2; - - // Config for SHOT_CHANGE_DETECTION. - ShotChangeDetectionConfig shot_change_detection_config = 3; - - // Config for EXPLICIT_CONTENT_DETECTION. - ExplicitContentDetectionConfig explicit_content_detection_config = 4; - - // Config for FACE_DETECTION. - FaceDetectionConfig face_detection_config = 5; -} - -// Config for LABEL_DETECTION. -message LabelDetectionConfig { - // What labels should be detected with LABEL_DETECTION, in addition to - // video-level labels or segment-level labels. - // If unspecified, defaults to `SHOT_MODE`. - LabelDetectionMode label_detection_mode = 1; - - // Whether the video has been shot from a stationary (i.e. non-moving) camera. - // When set to true, might improve detection accuracy for moving objects. - // Should be used with `SHOT_AND_FRAME_MODE` enabled. - bool stationary_camera = 2; - - // Model to use for label detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 3; -} - -// Config for SHOT_CHANGE_DETECTION. -message ShotChangeDetectionConfig { - // Model to use for shot change detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for EXPLICIT_CONTENT_DETECTION. -message ExplicitContentDetectionConfig { - // Model to use for explicit content detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for FACE_DETECTION. 
-message FaceDetectionConfig { - // Model to use for face detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; - - // Whether bounding boxes be included in the face annotation output. - bool include_bounding_boxes = 2; -} - -// Video segment. -message VideoSegment { - // Time-offset, relative to the beginning of the video, - // corresponding to the start of the segment (inclusive). - google.protobuf.Duration start_time_offset = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the end of the segment (inclusive). - google.protobuf.Duration end_time_offset = 2; -} - -// Video segment level annotation results for label detection. -message LabelSegment { - // Video segment where a label was detected. - VideoSegment segment = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Video frame level annotation results for label detection. -message LabelFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Detected entity from video analysis. -message Entity { - // Opaque entity ID. Some IDs may be available in - // [Google Knowledge Graph Search - // API](https://developers.google.com/knowledge-graph/). - string entity_id = 1; - - // Textual description, e.g. `Fixed-gear bicycle`. - string description = 2; - - // Language code for `description` in BCP-47 format. - string language_code = 3; -} - -// Label annotation. -message LabelAnnotation { - // Detected entity. - Entity entity = 1; - - // Common categories for the detected entity. - // E.g. when the label is `Terrier` the category is likely `dog`. And in some - // cases there might be more than one categories e.g. `Terrier` could also be - // a `pet`. 
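`VideoSegment` in the proto above bounds a segment with two `google.protobuf.Duration` offsets (whole seconds plus nanos). A sketch of turning such offsets into a segment length in seconds; plain `(seconds, nanos)` tuples stand in for `Duration` messages here, an assumption made for illustration:

```python
def duration_seconds(seconds, nanos):
    """Collapse a Duration-style (seconds, nanos) pair into float seconds."""
    return seconds + nanos / 1e9

def segment_length(start, end):
    """Length in seconds of a segment bounded by (seconds, nanos) offsets."""
    return duration_seconds(*end) - duration_seconds(*start)

# A segment from 1.5 s to 4 s into the video:
segment_length((1, 500_000_000), (4, 0))  # -> 2.5
```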
- repeated Entity category_entities = 2; - - // All video segments where a label was detected. - repeated LabelSegment segments = 3; - - // All video frames where a label was detected. - repeated LabelFrame frames = 4; -} - -// Video frame level annotation results for explicit content. -message ExplicitContentFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Likelihood of the pornography content.. - Likelihood pornography_likelihood = 2; -} - -// Explicit content annotation (based on per-frame visual signals only). -// If no explicit content has been detected in a frame, no annotations are -// present for that frame. -message ExplicitContentAnnotation { - // All video frames where explicit content was detected. - repeated ExplicitContentFrame frames = 1; -} - -// Normalized bounding box. -// The normalized vertex coordinates are relative to the original image. -// Range: [0, 1]. -message NormalizedBoundingBox { - // Left X coordinate. - float left = 1; - - // Top Y coordinate. - float top = 2; - - // Right X coordinate. - float right = 3; - - // Bottom Y coordinate. - float bottom = 4; -} - -// Video segment level annotation results for face detection. -message FaceSegment { - // Video segment where a face was detected. - VideoSegment segment = 1; -} - -// Video frame level annotation results for face detection. -message FaceFrame { - // Normalized Bounding boxes in a frame. - // There can be more than one boxes if the same face is detected in multiple - // locations within the current frame. - repeated NormalizedBoundingBox normalized_bounding_boxes = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the video frame for this location. - google.protobuf.Duration time_offset = 2; -} - -// Face annotation. -message FaceAnnotation { - // Thumbnail of a representative face view (in JPEG format). 
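`NormalizedBoundingBox` above stores `left`/`top`/`right`/`bottom` in the [0, 1] range relative to the original frame. A sketch of mapping such a box back to pixel coordinates; the helper name and tuple layout are illustrative, not part of the client library:

```python
def to_pixels(box, width, height):
    """Scale a normalized (left, top, right, bottom) box to pixel coordinates."""
    left, top, right, bottom = box
    return (round(left * width), round(top * height),
            round(right * width), round(bottom * height))

# A centered box on a 640x480 frame:
to_pixels((0.25, 0.25, 0.75, 0.75), 640, 480)  # -> (160, 120, 480, 360)
```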
- bytes thumbnail = 1; - - // All video segments where a face was detected. - repeated FaceSegment segments = 2; - - // All video frames where a face was detected. - repeated FaceFrame frames = 3; -} - -// Annotation results for a single video. -message VideoAnnotationResults { - // Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Label annotations on video level or user specified segment level. - // There is exactly one element for each unique label. - repeated LabelAnnotation segment_label_annotations = 2; - - // Label annotations on shot level. - // There is exactly one element for each unique label. - repeated LabelAnnotation shot_label_annotations = 3; - - // Label annotations on frame level. - // There is exactly one element for each unique label. - repeated LabelAnnotation frame_label_annotations = 4; - - // Face annotations. There is exactly one element for each unique face. - repeated FaceAnnotation face_annotations = 5; - - // Shot annotations. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 6; - - // Explicit content annotation. - ExplicitContentAnnotation explicit_annotation = 7; - - // If set, indicates an error. Note that for a single `AnnotateVideoRequest` - // some videos may succeed and some may fail. - google.rpc.Status error = 9; -} - -// Video annotation response. Included in the `response` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoResponse { - // Annotation results for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationResults annotation_results = 1; -} - -// Annotation progress for a single video. -message VideoAnnotationProgress { - // Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Approximate percentage processed thus far. 
- // Guaranteed to be 100 when fully processed. - int32 progress_percent = 2; - - // Time when the request was received. - google.protobuf.Timestamp start_time = 3; - - // Time of the most recent update. - google.protobuf.Timestamp update_time = 4; -} - -// Video annotation progress. Included in the `metadata` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoProgress { - // Progress metadata for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationProgress annotation_progress = 1; -} - -// Video annotation feature. -enum Feature { - // Unspecified. - FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - LABEL_DETECTION = 1; - - // Shot change detection. - SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - EXPLICIT_CONTENT_DETECTION = 3; - - // Human face detection and tracking. - FACE_DETECTION = 4; -} - -// Label detection mode. -enum LabelDetectionMode { - // Unspecified. - LABEL_DETECTION_MODE_UNSPECIFIED = 0; - - // Detect shot-level labels. - SHOT_MODE = 1; - - // Detect frame-level labels. - FRAME_MODE = 2; - - // Detect both shot-level and frame-level labels. - SHOT_AND_FRAME_MODE = 3; -} - -// Bucketized representation of likelihood. -enum Likelihood { - // Unspecified likelihood. - LIKELIHOOD_UNSPECIFIED = 0; - - // Very unlikely. - VERY_UNLIKELY = 1; - - // Unlikely. - UNLIKELY = 2; - - // Possible. - POSSIBLE = 3; - - // Likely. - LIKELY = 4; - - // Very likely. 
- VERY_LIKELY = 5; -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2.py deleted file mode 100644 index 96840dd2622e..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2.py +++ /dev/null @@ -1,2451 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto -"""Generated protocol buffer code.""" -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/videointelligence_v1beta2/proto/video_intelligence.proto", - package="google.cloud.videointelligence.v1beta2", - syntax="proto3", - 
serialized_options=b"\n*com.google.cloud.videointelligence.v1beta2B\035VideoIntelligenceServiceProtoP\001ZWgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1beta2;videointelligence\252\002&Google.Cloud.VideoIntelligence.V1Beta2\312\002&Google\\Cloud\\VideoIntelligence\\V1beta2\352\002)Google::Cloud::VideoIntelligence::V1beta2", - create_key=_descriptor._internal_create_key, - serialized_pb=b'\nEgoogle/cloud/videointelligence_v1beta2/proto/video_intelligence.proto\x12&google.cloud.videointelligence.v1beta2\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a#google/longrunning/operations.proto\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17google/rpc/status.proto"\x88\x02\n\x14\x41nnotateVideoRequest\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x15\n\rinput_content\x18\x06 \x01(\x0c\x12\x46\n\x08\x66\x65\x61tures\x18\x02 \x03(\x0e\x32/.google.cloud.videointelligence.v1beta2.FeatureB\x03\xe0\x41\x02\x12K\n\rvideo_context\x18\x03 \x01(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.VideoContext\x12\x17\n\noutput_uri\x18\x04 \x01(\tB\x03\xe0\x41\x01\x12\x18\n\x0blocation_id\x18\x05 \x01(\tB\x03\xe0\x41\x01"\xec\x03\n\x0cVideoContext\x12\x46\n\x08segments\x18\x01 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.VideoSegment\x12\\\n\x16label_detection_config\x18\x02 \x01(\x0b\x32<.google.cloud.videointelligence.v1beta2.LabelDetectionConfig\x12g\n\x1cshot_change_detection_config\x18\x03 \x01(\x0b\x32\x41.google.cloud.videointelligence.v1beta2.ShotChangeDetectionConfig\x12q\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32\x46.google.cloud.videointelligence.v1beta2.ExplicitContentDetectionConfig\x12Z\n\x15\x66\x61\x63\x65_detection_config\x18\x05 \x01(\x0b\x32;.google.cloud.videointelligence.v1beta2.FaceDetectionConfig"\x9a\x01\n\x14LabelDetectionConfig\x12X\n\x14label_detection_mode\x18\x01 
\x01(\x0e\x32:.google.cloud.videointelligence.v1beta2.LabelDetectionMode\x12\x19\n\x11stationary_camera\x18\x02 \x01(\x08\x12\r\n\x05model\x18\x03 \x01(\t"*\n\x19ShotChangeDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"/\n\x1e\x45xplicitContentDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"D\n\x13\x46\x61\x63\x65\x44\x65tectionConfig\x12\r\n\x05model\x18\x01 \x01(\t\x12\x1e\n\x16include_bounding_boxes\x18\x02 \x01(\x08"x\n\x0cVideoSegment\x12\x34\n\x11start_time_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x32\n\x0f\x65nd_time_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"i\n\x0cLabelSegment\x12\x45\n\x07segment\x18\x01 \x01(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02"P\n\nLabelFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x12\n\nconfidence\x18\x02 \x01(\x02"G\n\x06\x45ntity\x12\x11\n\tentity_id\x18\x01 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 \x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t"\xa8\x02\n\x0fLabelAnnotation\x12>\n\x06\x65ntity\x18\x01 \x01(\x0b\x32..google.cloud.videointelligence.v1beta2.Entity\x12I\n\x11\x63\x61tegory_entities\x18\x02 \x03(\x0b\x32..google.cloud.videointelligence.v1beta2.Entity\x12\x46\n\x08segments\x18\x03 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.LabelSegment\x12\x42\n\x06\x66rames\x18\x04 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1beta2.LabelFrame"\x9a\x01\n\x14\x45xplicitContentFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12R\n\x16pornography_likelihood\x18\x02 \x01(\x0e\x32\x32.google.cloud.videointelligence.v1beta2.Likelihood"i\n\x19\x45xplicitContentAnnotation\x12L\n\x06\x66rames\x18\x01 \x03(\x0b\x32<.google.cloud.videointelligence.v1beta2.ExplicitContentFrame"Q\n\x15NormalizedBoundingBox\x12\x0c\n\x04left\x18\x01 \x01(\x02\x12\x0b\n\x03top\x18\x02 \x01(\x02\x12\r\n\x05right\x18\x03 \x01(\x02\x12\x0e\n\x06\x62ottom\x18\x04 
\x01(\x02"T\n\x0b\x46\x61\x63\x65Segment\x12\x45\n\x07segment\x18\x01 \x01(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.VideoSegment"\x9d\x01\n\tFaceFrame\x12`\n\x19normalized_bounding_boxes\x18\x01 \x03(\x0b\x32=.google.cloud.videointelligence.v1beta2.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"\xad\x01\n\x0e\x46\x61\x63\x65\x41nnotation\x12\x11\n\tthumbnail\x18\x01 \x01(\x0c\x12\x45\n\x08segments\x18\x02 \x03(\x0b\x32\x33.google.cloud.videointelligence.v1beta2.FaceSegment\x12\x41\n\x06\x66rames\x18\x03 \x03(\x0b\x32\x31.google.cloud.videointelligence.v1beta2.FaceFrame"\xdf\x04\n\x16VideoAnnotationResults\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12Z\n\x19segment_label_annotations\x18\x02 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1beta2.LabelAnnotation\x12W\n\x16shot_label_annotations\x18\x03 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1beta2.LabelAnnotation\x12X\n\x17\x66rame_label_annotations\x18\x04 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1beta2.LabelAnnotation\x12P\n\x10\x66\x61\x63\x65_annotations\x18\x05 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1beta2.FaceAnnotation\x12N\n\x10shot_annotations\x18\x06 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1beta2.VideoSegment\x12^\n\x13\x65xplicit_annotation\x18\x07 \x01(\x0b\x32\x41.google.cloud.videointelligence.v1beta2.ExplicitContentAnnotation\x12!\n\x05\x65rror\x18\t \x01(\x0b\x32\x12.google.rpc.Status"s\n\x15\x41nnotateVideoResponse\x12Z\n\x12\x61nnotation_results\x18\x01 \x03(\x0b\x32>.google.cloud.videointelligence.v1beta2.VideoAnnotationResults"\xa7\x01\n\x17VideoAnnotationProgress\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x18\n\x10progress_percent\x18\x02 \x01(\x05\x12.\n\nstart_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0bupdate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"u\n\x15\x41nnotateVideoProgress\x12\\\n\x13\x61nnotation_progress\x18\x01 
\x03(\x0b\x32?.google.cloud.videointelligence.v1beta2.VideoAnnotationProgress*\x86\x01\n\x07\x46\x65\x61ture\x12\x17\n\x13\x46\x45\x41TURE_UNSPECIFIED\x10\x00\x12\x13\n\x0fLABEL_DETECTION\x10\x01\x12\x19\n\x15SHOT_CHANGE_DETECTION\x10\x02\x12\x1e\n\x1a\x45XPLICIT_CONTENT_DETECTION\x10\x03\x12\x12\n\x0e\x46\x41\x43\x45_DETECTION\x10\x04*r\n\x12LabelDetectionMode\x12$\n LABEL_DETECTION_MODE_UNSPECIFIED\x10\x00\x12\r\n\tSHOT_MODE\x10\x01\x12\x0e\n\nFRAME_MODE\x10\x02\x12\x17\n\x13SHOT_AND_FRAME_MODE\x10\x03*t\n\nLikelihood\x12\x1a\n\x16LIKELIHOOD_UNSPECIFIED\x10\x00\x12\x11\n\rVERY_UNLIKELY\x10\x01\x12\x0c\n\x08UNLIKELY\x10\x02\x12\x0c\n\x08POSSIBLE\x10\x03\x12\n\n\x06LIKELY\x10\x04\x12\x0f\n\x0bVERY_LIKELY\x10\x05\x32\xca\x02\n\x18VideoIntelligenceService\x12\xd7\x01\n\rAnnotateVideo\x12<.google.cloud.videointelligence.v1beta2.AnnotateVideoRequest\x1a\x1d.google.longrunning.Operation"i\x82\xd3\xe4\x93\x02\x1d"\x18/v1beta2/videos:annotate:\x01*\xda\x41\x12input_uri,features\xca\x41.\n\x15\x41nnotateVideoResponse\x12\x15\x41nnotateVideoProgress\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platformB\xa4\x02\n*com.google.cloud.videointelligence.v1beta2B\x1dVideoIntelligenceServiceProtoP\x01ZWgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1beta2;videointelligence\xaa\x02&Google.Cloud.VideoIntelligence.V1Beta2\xca\x02&Google\\Cloud\\VideoIntelligence\\V1beta2\xea\x02)Google::Cloud::VideoIntelligence::V1beta2b\x06proto3', - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_rpc_dot_status__pb2.DESCRIPTOR, - ], -) - -_FEATURE = _descriptor.EnumDescriptor( - name="Feature", - 
full_name="google.cloud.videointelligence.v1beta2.Feature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FACE_DETECTION", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=3879, - serialized_end=4013, -) -_sym_db.RegisterEnumDescriptor(_FEATURE) - -Feature = enum_type_wrapper.EnumTypeWrapper(_FEATURE) -_LABELDETECTIONMODE = _descriptor.EnumDescriptor( - name="LabelDetectionMode", - full_name="google.cloud.videointelligence.v1beta2.LabelDetectionMode", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION_MODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_MODE", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FRAME_MODE", - index=2, - number=2, - serialized_options=None, - type=None, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_AND_FRAME_MODE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4015, - serialized_end=4129, -) -_sym_db.RegisterEnumDescriptor(_LABELDETECTIONMODE) - -LabelDetectionMode = enum_type_wrapper.EnumTypeWrapper(_LABELDETECTIONMODE) -_LIKELIHOOD = _descriptor.EnumDescriptor( - name="Likelihood", - full_name="google.cloud.videointelligence.v1beta2.Likelihood", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LIKELIHOOD_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_UNLIKELY", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="UNLIKELY", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="POSSIBLE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LIKELY", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_LIKELY", - index=5, - number=5, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4131, - serialized_end=4247, -) -_sym_db.RegisterEnumDescriptor(_LIKELIHOOD) - -Likelihood = enum_type_wrapper.EnumTypeWrapper(_LIKELIHOOD) -FEATURE_UNSPECIFIED = 0 -LABEL_DETECTION = 1 
-SHOT_CHANGE_DETECTION = 2 -EXPLICIT_CONTENT_DETECTION = 3 -FACE_DETECTION = 4 -LABEL_DETECTION_MODE_UNSPECIFIED = 0 -SHOT_MODE = 1 -FRAME_MODE = 2 -SHOT_AND_FRAME_MODE = 3 -LIKELIHOOD_UNSPECIFIED = 0 -VERY_UNLIKELY = 1 -UNLIKELY = 2 -POSSIBLE = 3 -LIKELY = 4 -VERY_LIKELY = 5 - - -_ANNOTATEVIDEOREQUEST = _descriptor.Descriptor( - name="AnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.input_content", - index=1, - number=6, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="features", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.features", - index=2, - number=2, - type=14, - cpp_type=8, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_context", - 
full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.video_context", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="output_uri", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.output_uri", - index=4, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="location_id", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoRequest.location_id", - index=5, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=329, - serialized_end=593, -) - - -_VIDEOCONTEXT = _descriptor.Descriptor( - name="VideoContext", - full_name="google.cloud.videointelligence.v1beta2.VideoContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1beta2.VideoContext.segments", - index=0, - number=1, - type=11, - cpp_type=10, - 
label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_detection_config", - full_name="google.cloud.videointelligence.v1beta2.VideoContext.label_detection_config", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1beta2.VideoContext.shot_change_detection_config", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - full_name="google.cloud.videointelligence.v1beta2.VideoContext.explicit_content_detection_config", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_detection_config", - full_name="google.cloud.videointelligence.v1beta2.VideoContext.face_detection_config", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - 
enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=596, - serialized_end=1088, -) - - -_LABELDETECTIONCONFIG = _descriptor.Descriptor( - name="LabelDetectionConfig", - full_name="google.cloud.videointelligence.v1beta2.LabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="label_detection_mode", - full_name="google.cloud.videointelligence.v1beta2.LabelDetectionConfig.label_detection_mode", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="stationary_camera", - full_name="google.cloud.videointelligence.v1beta2.LabelDetectionConfig.stationary_camera", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1beta2.LabelDetectionConfig.model", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - 
file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1091, - serialized_end=1245, -) - - -_SHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="ShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1beta2.ShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1beta2.ShotChangeDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1247, - serialized_end=1289, -) - - -_EXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="ExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1291, - serialized_end=1338, -) - - -_FACEDETECTIONCONFIG = _descriptor.Descriptor( - name="FaceDetectionConfig", - full_name="google.cloud.videointelligence.v1beta2.FaceDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1beta2.FaceDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_bounding_boxes", - full_name="google.cloud.videointelligence.v1beta2.FaceDetectionConfig.include_bounding_boxes", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1340, - serialized_end=1408, -) - - -_VIDEOSEGMENT = _descriptor.Descriptor( - name="VideoSegment", - full_name="google.cloud.videointelligence.v1beta2.VideoSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time_offset", - 
full_name="google.cloud.videointelligence.v1beta2.VideoSegment.start_time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time_offset", - full_name="google.cloud.videointelligence.v1beta2.VideoSegment.end_time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1410, - serialized_end=1530, -) - - -_LABELSEGMENT = _descriptor.Descriptor( - name="LabelSegment", - full_name="google.cloud.videointelligence.v1beta2.LabelSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1beta2.LabelSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1beta2.LabelSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - 
default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1532, - serialized_end=1637, -) - - -_LABELFRAME = _descriptor.Descriptor( - name="LabelFrame", - full_name="google.cloud.videointelligence.v1beta2.LabelFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1beta2.LabelFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1beta2.LabelFrame.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1639, - serialized_end=1719, -) - - -_ENTITY = _descriptor.Descriptor( - name="Entity", - full_name="google.cloud.videointelligence.v1beta2.Entity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity_id", - full_name="google.cloud.videointelligence.v1beta2.Entity.entity_id", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - full_name="google.cloud.videointelligence.v1beta2.Entity.description", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1beta2.Entity.language_code", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1721, - serialized_end=1792, -) - - -_LABELANNOTATION = _descriptor.Descriptor( - name="LabelAnnotation", - full_name="google.cloud.videointelligence.v1beta2.LabelAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - 
full_name="google.cloud.videointelligence.v1beta2.LabelAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="category_entities", - full_name="google.cloud.videointelligence.v1beta2.LabelAnnotation.category_entities", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1beta2.LabelAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1beta2.LabelAnnotation.frames", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1795, - serialized_end=2091, -) - - -_EXPLICITCONTENTFRAME = 
_descriptor.Descriptor( - name="ExplicitContentFrame", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="pornography_likelihood", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentFrame.pornography_likelihood", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2094, - serialized_end=2248, -) - - -_EXPLICITCONTENTANNOTATION = _descriptor.Descriptor( - name="ExplicitContentAnnotation", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1beta2.ExplicitContentAnnotation.frames", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2250, - serialized_end=2355, -) - - -_NORMALIZEDBOUNDINGBOX = _descriptor.Descriptor( - name="NormalizedBoundingBox", - full_name="google.cloud.videointelligence.v1beta2.NormalizedBoundingBox", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="left", - full_name="google.cloud.videointelligence.v1beta2.NormalizedBoundingBox.left", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="top", - full_name="google.cloud.videointelligence.v1beta2.NormalizedBoundingBox.top", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="right", - full_name="google.cloud.videointelligence.v1beta2.NormalizedBoundingBox.right", - index=2, - number=3, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.FieldDescriptor( - name="bottom", - full_name="google.cloud.videointelligence.v1beta2.NormalizedBoundingBox.bottom", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2357, - serialized_end=2438, -) - - -_FACESEGMENT = _descriptor.Descriptor( - name="FaceSegment", - full_name="google.cloud.videointelligence.v1beta2.FaceSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1beta2.FaceSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2440, - serialized_end=2524, -) - - -_FACEFRAME = _descriptor.Descriptor( - name="FaceFrame", - full_name="google.cloud.videointelligence.v1beta2.FaceFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_boxes", - full_name="google.cloud.videointelligence.v1beta2.FaceFrame.normalized_bounding_boxes", - index=0, - number=1, - type=11, 
- cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1beta2.FaceFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2527, - serialized_end=2684, -) - - -_FACEANNOTATION = _descriptor.Descriptor( - name="FaceAnnotation", - full_name="google.cloud.videointelligence.v1beta2.FaceAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="thumbnail", - full_name="google.cloud.videointelligence.v1beta2.FaceAnnotation.thumbnail", - index=0, - number=1, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1beta2.FaceAnnotation.segments", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1beta2.FaceAnnotation.frames", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2687, - serialized_end=2860, -) - - -_VIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="VideoAnnotationResults", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_label_annotations", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.segment_label_annotations", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.FieldDescriptor( - name="shot_label_annotations", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.shot_label_annotations", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_label_annotations", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.frame_label_annotations", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_annotations", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.face_annotations", - index=4, - number=5, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.shot_annotations", - index=5, - number=6, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_annotation", - 
full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.explicit_annotation", - index=6, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="error", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationResults.error", - index=7, - number=9, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2863, - serialized_end=3470, -) - - -_ANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="AnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_results", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoResponse.annotation_results", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - 
serialized_start=3472, - serialized_end=3587, -) - - -_VIDEOANNOTATIONPROGRESS = _descriptor.Descriptor( - name="VideoAnnotationProgress", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationProgress.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationProgress.progress_percent", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationProgress.start_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="update_time", - full_name="google.cloud.videointelligence.v1beta2.VideoAnnotationProgress.update_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - 
enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3590, - serialized_end=3757, -) - - -_ANNOTATEVIDEOPROGRESS = _descriptor.Descriptor( - name="AnnotateVideoProgress", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_progress", - full_name="google.cloud.videointelligence.v1beta2.AnnotateVideoProgress.annotation_progress", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3759, - serialized_end=3876, -) - -_ANNOTATEVIDEOREQUEST.fields_by_name["features"].enum_type = _FEATURE -_ANNOTATEVIDEOREQUEST.fields_by_name["video_context"].message_type = _VIDEOCONTEXT -_VIDEOCONTEXT.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_VIDEOCONTEXT.fields_by_name[ - "label_detection_config" -].message_type = _LABELDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "shot_change_detection_config" -].message_type = _SHOTCHANGEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "explicit_content_detection_config" -].message_type = _EXPLICITCONTENTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "face_detection_config" -].message_type = 
_FACEDETECTIONCONFIG -_LABELDETECTIONCONFIG.fields_by_name[ - "label_detection_mode" -].enum_type = _LABELDETECTIONMODE -_VIDEOSEGMENT.fields_by_name[ - "start_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_VIDEOSEGMENT.fields_by_name[ - "end_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_LABELFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["category_entities"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["segments"].message_type = _LABELSEGMENT -_LABELANNOTATION.fields_by_name["frames"].message_type = _LABELFRAME -_EXPLICITCONTENTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_EXPLICITCONTENTFRAME.fields_by_name["pornography_likelihood"].enum_type = _LIKELIHOOD -_EXPLICITCONTENTANNOTATION.fields_by_name["frames"].message_type = _EXPLICITCONTENTFRAME -_FACESEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_FACEFRAME.fields_by_name[ - "normalized_bounding_boxes" -].message_type = _NORMALIZEDBOUNDINGBOX -_FACEFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_FACEANNOTATION.fields_by_name["segments"].message_type = _FACESEGMENT -_FACEANNOTATION.fields_by_name["frames"].message_type = _FACEFRAME -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "frame_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "face_annotations" -].message_type = _FACEANNOTATION 
-_VIDEOANNOTATIONRESULTS.fields_by_name["shot_annotations"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_ANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _VIDEOANNOTATIONRESULTS -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "update_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_ANNOTATEVIDEOPROGRESS.fields_by_name[ - "annotation_progress" -].message_type = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoRequest"] = _ANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["VideoContext"] = _VIDEOCONTEXT -DESCRIPTOR.message_types_by_name["LabelDetectionConfig"] = _LABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ShotChangeDetectionConfig" -] = _SHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ExplicitContentDetectionConfig" -] = _EXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["FaceDetectionConfig"] = _FACEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["VideoSegment"] = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["LabelSegment"] = _LABELSEGMENT -DESCRIPTOR.message_types_by_name["LabelFrame"] = _LABELFRAME -DESCRIPTOR.message_types_by_name["Entity"] = _ENTITY -DESCRIPTOR.message_types_by_name["LabelAnnotation"] = _LABELANNOTATION -DESCRIPTOR.message_types_by_name["ExplicitContentFrame"] = _EXPLICITCONTENTFRAME -DESCRIPTOR.message_types_by_name[ - "ExplicitContentAnnotation" -] = _EXPLICITCONTENTANNOTATION -DESCRIPTOR.message_types_by_name["NormalizedBoundingBox"] = _NORMALIZEDBOUNDINGBOX -DESCRIPTOR.message_types_by_name["FaceSegment"] = _FACESEGMENT -DESCRIPTOR.message_types_by_name["FaceFrame"] = _FACEFRAME 
-DESCRIPTOR.message_types_by_name["FaceAnnotation"] = _FACEANNOTATION -DESCRIPTOR.message_types_by_name["VideoAnnotationResults"] = _VIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name["AnnotateVideoResponse"] = _ANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name["VideoAnnotationProgress"] = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoProgress"] = _ANNOTATEVIDEOPROGRESS -DESCRIPTOR.enum_types_by_name["Feature"] = _FEATURE -DESCRIPTOR.enum_types_by_name["LabelDetectionMode"] = _LABELDETECTIONMODE -DESCRIPTOR.enum_types_by_name["Likelihood"] = _LIKELIHOOD -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -AnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video annotation request. - - - Attributes: - input_uri: - Input video location. Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - A video URI may include wildcards in ``object-id``, and thus - identify multiple videos. Supported wildcards: ’*’ to match 0 - or more characters; ‘?’ to match 1 character. If unset, the - input video should be embedded in the request as - ``input_content``. If set, ``input_content`` should be unset. - input_content: - The video data bytes. If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be - unset. - features: - Required. Requested video annotation features. - video_context: - Additional video context and/or feature-specific parameters. - output_uri: - Optional. Location where the output (in JSON format) should be - stored. 
Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - location_id: - Optional. Cloud region where annotation should take place. - Supported cloud regions: ``us-east1``, ``us-west1``, ``europe- - west1``, ``asia-east1``. If no region is specified, a region - will be determined based on video file location. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.AnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(AnnotateVideoRequest) - -VideoContext = _reflection.GeneratedProtocolMessageType( - "VideoContext", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOCONTEXT, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video context and/or feature-specific parameters. - - - Attributes: - segments: - Video segments to annotate. The segments may overlap and are - not required to be contiguous or span the whole video. If - unspecified, each video is treated as a single segment. - label_detection_config: - Config for LABEL_DETECTION. - shot_change_detection_config: - Config for SHOT_CHANGE_DETECTION. - explicit_content_detection_config: - Config for EXPLICIT_CONTENT_DETECTION. - face_detection_config: - Config for FACE_DETECTION. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.VideoContext) - }, -) -_sym_db.RegisterMessage(VideoContext) - -LabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "LabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _LABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Config for LABEL_DETECTION. 
- - - Attributes: - label_detection_mode: - What labels should be detected with LABEL_DETECTION, in - addition to video-level labels or segment-level labels. If - unspecified, defaults to ``SHOT_MODE``. - stationary_camera: - Whether the video has been shot from a stationary (i.e. non- - moving) camera. When set to true, might improve detection - accuracy for moving objects. Should be used with - ``SHOT_AND_FRAME_MODE`` enabled. - model: - Model to use for label detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.LabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(LabelDetectionConfig) - -ShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Config for SHOT_CHANGE_DETECTION. - - - Attributes: - model: - Model to use for shot change detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.ShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(ShotChangeDetectionConfig) - -ExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Config for EXPLICIT_CONTENT_DETECTION. - - - Attributes: - model: - Model to use for explicit content detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.ExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(ExplicitContentDetectionConfig) - -FaceDetectionConfig = _reflection.GeneratedProtocolMessageType( - "FaceDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _FACEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Config for FACE_DETECTION. - - - Attributes: - model: - Model to use for face detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - include_bounding_boxes: - Whether bounding boxes be included in the face annotation - output. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.FaceDetectionConfig) - }, -) -_sym_db.RegisterMessage(FaceDetectionConfig) - -VideoSegment = _reflection.GeneratedProtocolMessageType( - "VideoSegment", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOSEGMENT, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video segment. - - - Attributes: - start_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the start of the segment (inclusive). - end_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the end of the segment (inclusive). - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.VideoSegment) - }, -) -_sym_db.RegisterMessage(VideoSegment) - -LabelSegment = _reflection.GeneratedProtocolMessageType( - "LabelSegment", - (_message.Message,), - { - "DESCRIPTOR": _LABELSEGMENT, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for label detection. - - - Attributes: - segment: - Video segment where a label was detected. - confidence: - Confidence that the label is accurate. Range: [0, 1]. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.LabelSegment) - }, -) -_sym_db.RegisterMessage(LabelSegment) - -LabelFrame = _reflection.GeneratedProtocolMessageType( - "LabelFrame", - (_message.Message,), - { - "DESCRIPTOR": _LABELFRAME, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for label detection. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.LabelFrame) - }, -) -_sym_db.RegisterMessage(LabelFrame) - -Entity = _reflection.GeneratedProtocolMessageType( - "Entity", - (_message.Message,), - { - "DESCRIPTOR": _ENTITY, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Detected entity from video analysis. - - - Attributes: - entity_id: - Opaque entity ID. Some IDs may be available in `Google - Knowledge Graph Search API - `__. - description: - Textual description, e.g. ``Fixed-gear bicycle``. - language_code: - Language code for ``description`` in BCP-47 format. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.Entity) - }, -) -_sym_db.RegisterMessage(Entity) - -LabelAnnotation = _reflection.GeneratedProtocolMessageType( - "LabelAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LABELANNOTATION, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Label annotation. - - - Attributes: - entity: - Detected entity. - category_entities: - Common categories for the detected entity. E.g. when the label - is ``Terrier`` the category is likely ``dog``. And in some - cases there might be more than one categories e.g. 
``Terrier`` - could also be a ``pet``. - segments: - All video segments where a label was detected. - frames: - All video frames where a label was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.LabelAnnotation) - }, -) -_sym_db.RegisterMessage(LabelAnnotation) - -ExplicitContentFrame = _reflection.GeneratedProtocolMessageType( - "ExplicitContentFrame", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTFRAME, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for explicit content. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - pornography_likelihood: - Likelihood of the pornography content.. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.ExplicitContentFrame) - }, -) -_sym_db.RegisterMessage(ExplicitContentFrame) - -ExplicitContentAnnotation = _reflection.GeneratedProtocolMessageType( - "ExplicitContentAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTANNOTATION, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Explicit content annotation (based on per-frame visual signals only). - If no explicit content has been detected in a frame, no annotations - are present for that frame. - - - Attributes: - frames: - All video frames where explicit content was detected. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.ExplicitContentAnnotation) - }, -) -_sym_db.RegisterMessage(ExplicitContentAnnotation) - -NormalizedBoundingBox = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingBox", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGBOX, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding box. The normalized vertex coordinates are - relative to the original image. Range: [0, 1]. - - - Attributes: - left: - Left X coordinate. - top: - Top Y coordinate. - right: - Right X coordinate. - bottom: - Bottom Y coordinate. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.NormalizedBoundingBox) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingBox) - -FaceSegment = _reflection.GeneratedProtocolMessageType( - "FaceSegment", - (_message.Message,), - { - "DESCRIPTOR": _FACESEGMENT, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for face detection. - - - Attributes: - segment: - Video segment where a face was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.FaceSegment) - }, -) -_sym_db.RegisterMessage(FaceSegment) - -FaceFrame = _reflection.GeneratedProtocolMessageType( - "FaceFrame", - (_message.Message,), - { - "DESCRIPTOR": _FACEFRAME, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for face detection. - - - Attributes: - normalized_bounding_boxes: - Normalized Bounding boxes in a frame. There can be more than - one boxes if the same face is detected in multiple locations - within the current frame. - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.FaceFrame) - }, -) -_sym_db.RegisterMessage(FaceFrame) - -FaceAnnotation = _reflection.GeneratedProtocolMessageType( - "FaceAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _FACEANNOTATION, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Face annotation. - - - Attributes: - thumbnail: - Thumbnail of a representative face view (in JPEG format). - segments: - All video segments where a face was detected. - frames: - All video frames where a face was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.FaceAnnotation) - }, -) -_sym_db.RegisterMessage(FaceAnnotation) - -VideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Annotation results for a single video. - - - Attributes: - input_uri: - Video file location in `Google Cloud Storage - `__. - segment_label_annotations: - Label annotations on video level or user specified segment - level. There is exactly one element for each unique label. - shot_label_annotations: - Label annotations on shot level. There is exactly one element - for each unique label. - frame_label_annotations: - Label annotations on frame level. There is exactly one element - for each unique label. - face_annotations: - Face annotations. There is exactly one element for each unique - face. - shot_annotations: - Shot annotations. Each shot is represented as a video segment. - explicit_annotation: - Explicit content annotation. - error: - If set, indicates an error. Note that for a single - ``AnnotateVideoRequest`` some videos may succeed and some may - fail. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.VideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(VideoAnnotationResults) - -AnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video annotation response. Included in the ``response`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_results: - Annotation results for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.AnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(AnnotateVideoResponse) - -VideoAnnotationProgress = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationProgress", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONPROGRESS, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Annotation progress for a single video. - - - Attributes: - input_uri: - Video file location in `Google Cloud Storage - `__. - progress_percent: - Approximate percentage processed thus far. Guaranteed to be - 100 when fully processed. - start_time: - Time when the request was received. - update_time: - Time of the most recent update. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.VideoAnnotationProgress) - }, -) -_sym_db.RegisterMessage(VideoAnnotationProgress) - -AnnotateVideoProgress = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoProgress", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOPROGRESS, - "__module__": "google.cloud.videointelligence_v1beta2.proto.video_intelligence_pb2", - "__doc__": """Video annotation progress. 
Included in the ``metadata`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_progress: - Progress metadata for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1beta2.AnnotateVideoProgress) - }, -) -_sym_db.RegisterMessage(AnnotateVideoProgress) - - -DESCRIPTOR._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["features"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["output_uri"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["location_id"]._options = None - -_VIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="VideoIntelligenceService", - full_name="google.cloud.videointelligence.v1beta2.VideoIntelligenceService", - file=DESCRIPTOR, - index=0, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=4250, - serialized_end=4580, - methods=[ - _descriptor.MethodDescriptor( - name="AnnotateVideo", - full_name="google.cloud.videointelligence.v1beta2.VideoIntelligenceService.AnnotateVideo", - index=0, - containing_service=None, - input_type=_ANNOTATEVIDEOREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=b'\202\323\344\223\002\035"\030/v1beta2/videos:annotate:\001*\332A\022input_uri,features\312A.\n\025AnnotateVideoResponse\022\025AnnotateVideoProgress', - create_key=_descriptor._internal_create_key, - ), - ], -) -_sym_db.RegisterServiceDescriptor(_VIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name["VideoIntelligenceService"] = _VIDEOINTELLIGENCESERVICE - -# @@protoc_insertion_point(module_scope) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2_grpc.py 
b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2_grpc.py deleted file mode 100644 index 1b7f01167c81..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/proto/video_intelligence_pb2_grpc.py +++ /dev/null @@ -1,92 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from google.cloud.videointelligence_v1beta2.proto import ( - video_intelligence_pb2 as google_dot_cloud_dot_videointelligence__v1beta2_dot_proto_dot_video__intelligence__pb2, -) -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) - - -class VideoIntelligenceServiceStub(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. - """ - self.AnnotateVideo = channel.unary_unary( - "/google.cloud.videointelligence.v1beta2.VideoIntelligenceService/AnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1beta2_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, - ) - - -class VideoIntelligenceServiceServicer(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def AnnotateVideo(self, request, context): - """Performs asynchronous video annotation. Progress and results can be - retrieved through the `google.longrunning.Operations` interface. - `Operation.metadata` contains `AnnotateVideoProgress` (progress). - `Operation.response` contains `AnnotateVideoResponse` (results). 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_VideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "AnnotateVideo": grpc.unary_unary_rpc_method_handler( - servicer.AnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1beta2_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1beta2.VideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class VideoIntelligenceService(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - @staticmethod - def AnnotateVideo( - request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.unary_unary( - request, - target, - "/google.cloud.videointelligence.v1beta2.VideoIntelligenceService/AnnotateVideo", - google_dot_cloud_dot_videointelligence__v1beta2_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - google_dot_longrunning_dot_operations__pb2.Operation.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/py.typed new file mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ 
b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/cloud/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/__init__.py similarity index 71% rename from packages/google-cloud-videointelligence/google/cloud/__init__.py rename to packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/__init__.py index 9a1b64a6d586..42ffdf2bc43d 100644 --- a/packages/google-cloud-videointelligence/google/cloud/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/__init__.py @@ -1,24 +1,16 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
- -try: - import pkg_resources - - pkg_resources.declare_namespace(__name__) -except ImportError: - import pkgutil - - __path__ = pkgutil.extend_path(__path__, __name__) +# diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/__init__.py new file mode 100644 index 000000000000..559320286ca2 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from .client import VideoIntelligenceServiceClient +from .async_client import VideoIntelligenceServiceAsyncClient + +__all__ = ( + "VideoIntelligenceServiceClient", + "VideoIntelligenceServiceAsyncClient", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/async_client.py new file mode 100644 index 000000000000..788c95736cd6 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/async_client.py @@ -0,0 +1,270 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import Dict, Sequence, Tuple, Type, Union +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1beta2.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport +from .client import VideoIntelligenceServiceClient + + +class VideoIntelligenceServiceAsyncClient: + """Service that implements Google Cloud Video Intelligence API.""" + + _client: VideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + + common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod(VideoIntelligenceServiceClient.common_folder_path) + parse_common_folder_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + VideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + VideoIntelligenceServiceClient.common_project_path + ) + 
parse_common_project_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + VideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = VideoIntelligenceServiceClient.from_service_account_file + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(VideoIntelligenceServiceClient).get_transport_class, + type(VideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[str, VideoIntelligenceServiceTransport] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = VideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + async def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation_async.AsyncOperation: + r"""Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. 
Currently, only `Google Cloud + Storage <https://cloud.google.com/storage/>`__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request.
+ has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/client.py new file mode 100644 index 000000000000..c8df1586cb70 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/client.py @@ -0,0 +1,438 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
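The async client above re-exports the resource-path helpers defined on the sync client (`common_billing_account_path`, `common_location_path`, and their `parse_*` inverses). A minimal standalone sketch of one helper pair, reimplemented here so the round-trip behavior can be shown without the client library installed:

```python
import re


def common_location_path(project: str, location: str) -> str:
    """Return a fully-qualified location string."""
    return "projects/{project}/locations/{location}".format(
        project=project, location=location
    )


def parse_common_location_path(path: str) -> dict:
    """Parse a location path back into its component segments."""
    m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
    return m.groupdict() if m else {}


path = common_location_path("my-project", "us-east1")
print(path)  # projects/my-project/locations/us-east1
print(parse_common_location_path(path))  # {'project': 'my-project', 'location': 'us-east1'}
```

The lazy `.+?` quantifiers mirror the generated code: each segment matches as little as possible, so the `/locations/` literal anchors the split between the two captures.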
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import Callable, Dict, Optional, Sequence, Tuple, Type, Union +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1beta2.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import VideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +class VideoIntelligenceServiceClientMeta(type): + """Metaclass for the VideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport + _transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[VideoIntelligenceServiceTransport]: + """Return an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. 
+ + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class VideoIntelligenceServiceClient(metaclass=VideoIntelligenceServiceClientMeta): + """Service that implements Google Cloud Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + VideoIntelligenceServiceClient: The constructed client.
+ """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse a organization path into its component segments.""" + m = re.match(r"^organizations/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + + @staticmethod + def parse_common_project_path(path: str) -> 
Dict[str, str]: + """Parse a project path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_location_path(project: str, location: str,) -> str: + """Return a fully-qualified location string.""" + return "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + + @staticmethod + def parse_common_location_path(path: str) -> Dict[str, str]: + """Parse a location path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path) + return m.groupdict() if m else {} + + def __init__( + self, + *, + credentials: Optional[credentials.Credentials] = None, + transport: Union[str, VideoIntelligenceServiceTransport, None] = None, + client_options: Optional[client_options_lib.ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (client_options_lib.ClientOptions): Custom options for the + client. It won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value).
However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, VideoIntelligenceServiceTransport): + # transport is a VideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: + r"""Performs asynchronous video annotation. 
Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Google Cloud + Storage <https://cloud.google.com/storage/>`__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response.
Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # Minor optimization to avoid making a copy if the user passes + # in a video_intelligence.AnnotateVideoRequest. + # There's no risk of modifying the input as we've already verified + # there are no flattened fields. + if not isinstance(request, video_intelligence.AnnotateVideoRequest): + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.annotate_video] + + # Send the request. + response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..08de302230ba --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import VideoIntelligenceServiceTransport +from .grpc import VideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
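The client's `_get_default_mtls_endpoint` above rewrites a `*.googleapis.com` hostname into its `*.mtls.googleapis.com` counterpart. A standalone sketch of that conversion, reimplemented (with the same regex and group names) so it can be exercised without the client library installed:

```python
import re

# Mirrors the generated _get_default_mtls_endpoint logic: capture the host
# name plus optional ".mtls", ".sandbox", and ".googleapis.com" suffixes.
_MTLS_ENDPOINT_RE = re.compile(
    r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?"
    r"(?P<googledomain>\.googleapis\.com)?"
)


def to_mtls_endpoint(api_endpoint):
    if not api_endpoint:
        return api_endpoint
    name, mtls, sandbox, googledomain = _MTLS_ENDPOINT_RE.match(api_endpoint).groups()
    if mtls or not googledomain:
        # Already an mTLS endpoint, or not a *.googleapis.com host.
        return api_endpoint
    if sandbox:
        return api_endpoint.replace(
            "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
        )
    return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")


print(to_mtls_endpoint("videointelligence.googleapis.com"))
# videointelligence.mtls.googleapis.com
```

Whether this mTLS endpoint is actually used at runtime is decided separately by the `GOOGLE_API_USE_MTLS_ENDPOINT` environment variable ("auto" by default) together with the presence of a client certificate, as handled in `__init__`.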
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport +_transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "VideoIntelligenceServiceTransport", + "VideoIntelligenceServiceGrpcTransport", + "VideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..3fea249eb44b --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/base.py @@ -0,0 +1,141 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import abc +import typing +import pkg_resources + +from google import auth # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.auth import credentials # type: ignore + +from google.cloud.videointelligence_v1beta2.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +class VideoIntelligenceServiceTransport(abc.ABC): + """Abstract transport class for VideoIntelligenceService.""" + + AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: typing.Optional[str] = None, + scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES, + quota_project_id: typing.Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes.
+ quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.annotate_video: gapic_v1.method.wrap_method( + self.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=client_info, + ), + } + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.AnnotateVideoRequest], + typing.Union[operations.Operation, typing.Awaitable[operations.Operation]], + ]: + raise NotImplementedError() + + +__all__ = ("VideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..03cd358b7805 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1beta2.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class VideoIntelligenceServiceGrpcTransport(VideoIntelligenceServiceTransport): + """gRPC backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): A list of scopes. This argument is + ignored if ``channel`` is provided. + channel (Optional[grpc.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided.
+ credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+ super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[[video_intelligence.AnnotateVideoRequest], operations.Operation]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1beta2.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..46056cbc7d58 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/services/video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google.api_core import operations_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1beta2.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import VideoIntelligenceServiceGrpcTransport + + +class VideoIntelligenceServiceGrpcAsyncIOTransport(VideoIntelligenceServiceTransport): + """gRPC AsyncIO backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service.
If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: aio.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id=None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided.
+ credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[aio.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. 
+ + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsAsyncClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[ + [video_intelligence.AnnotateVideoRequest], Awaitable[operations.Operation] + ]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1beta2.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types.py deleted file mode 100644 index 69366e7c8e43..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from __future__ import absolute_import -import sys - -from google.api_core.protobuf_helpers import get_messages - -from google.cloud.videointelligence_v1beta2.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - - -_shared_modules = [ - operations_pb2, - any_pb2, - duration_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [ - video_intelligence_pb2, -] - -names = [] - -for module in _shared_modules: # pragma: NO COVER - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.videointelligence_v1beta2.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - - -__all__ = tuple(sorted(names)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/__init__.py new file mode 100644 index 000000000000..20cc522465cc --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/__init__.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +from .video_intelligence import ( + AnnotateVideoRequest, + VideoContext, + LabelDetectionConfig, + ShotChangeDetectionConfig, + ExplicitContentDetectionConfig, + FaceDetectionConfig, + VideoSegment, + LabelSegment, + LabelFrame, + Entity, + LabelAnnotation, + ExplicitContentFrame, + ExplicitContentAnnotation, + NormalizedBoundingBox, + FaceSegment, + FaceFrame, + FaceAnnotation, + VideoAnnotationResults, + AnnotateVideoResponse, + VideoAnnotationProgress, + AnnotateVideoProgress, +) + + +__all__ = ( + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "FaceDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "FaceSegment", + "FaceFrame", + "FaceAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/video_intelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/video_intelligence.py new file mode 100644 index 000000000000..36c0eb656890 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1beta2/types/video_intelligence.py @@ -0,0 +1,577 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import proto # type: ignore + + +from google.protobuf import duration_pb2 as duration # type: ignore +from google.protobuf import timestamp_pb2 as timestamp # type: ignore +from google.rpc import status_pb2 as status # type: ignore + + +__protobuf__ = proto.module( + package="google.cloud.videointelligence.v1beta2", + manifest={ + "Feature", + "LabelDetectionMode", + "Likelihood", + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "FaceDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "FaceSegment", + "FaceFrame", + "FaceAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + }, +) + + +class Feature(proto.Enum): + r"""Video annotation feature.""" + FEATURE_UNSPECIFIED = 0 + LABEL_DETECTION = 1 + SHOT_CHANGE_DETECTION = 2 + EXPLICIT_CONTENT_DETECTION = 3 + FACE_DETECTION = 4 + + +class LabelDetectionMode(proto.Enum): + r"""Label detection mode.""" + LABEL_DETECTION_MODE_UNSPECIFIED = 0 + SHOT_MODE = 1 + FRAME_MODE = 2 + SHOT_AND_FRAME_MODE = 3 + + +class Likelihood(proto.Enum): + r"""Bucketized representation of likelihood.""" + LIKELIHOOD_UNSPECIFIED = 0 + VERY_UNLIKELY = 1 + UNLIKELY = 2 + POSSIBLE = 3 + LIKELY = 4 + VERY_LIKELY = 5 + + +class AnnotateVideoRequest(proto.Message): + r"""Video annotation request. 
+ + Attributes: + input_uri (str): + Input video location. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and thus + identify multiple videos. Supported wildcards: '*' to match + 0 or more characters; '?' to match 1 character. If unset, + the input video should be embedded in the request as + ``input_content``. If set, ``input_content`` should be + unset. + input_content (bytes): + The video data bytes. If unset, the input video(s) should be + specified via ``input_uri``. If set, ``input_uri`` should be + unset. + features (Sequence[~.video_intelligence.Feature]): + Required. Requested video annotation + features. + video_context (~.video_intelligence.VideoContext): + Additional video context and/or feature- + specific parameters. + output_uri (str): + Optional. Location where the output (in JSON format) should + be stored. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + location_id (str): + Optional. Cloud region where annotation should take place. + Supported cloud regions: ``us-east1``, ``us-west1``, + ``europe-west1``, ``asia-east1``. If no region is specified, + a region will be determined based on video file location.
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + input_content = proto.Field(proto.BYTES, number=6) + + features = proto.RepeatedField(proto.ENUM, number=2, enum="Feature",) + + video_context = proto.Field(proto.MESSAGE, number=3, message="VideoContext",) + + output_uri = proto.Field(proto.STRING, number=4) + + location_id = proto.Field(proto.STRING, number=5) + + +class VideoContext(proto.Message): + r"""Video context and/or feature-specific parameters. + + Attributes: + segments (Sequence[~.video_intelligence.VideoSegment]): + Video segments to annotate. The segments may + overlap and are not required to be contiguous or + span the whole video. If unspecified, each video + is treated as a single segment. + label_detection_config (~.video_intelligence.LabelDetectionConfig): + Config for LABEL_DETECTION. + shot_change_detection_config (~.video_intelligence.ShotChangeDetectionConfig): + Config for SHOT_CHANGE_DETECTION. + explicit_content_detection_config (~.video_intelligence.ExplicitContentDetectionConfig): + Config for EXPLICIT_CONTENT_DETECTION. + face_detection_config (~.video_intelligence.FaceDetectionConfig): + Config for FACE_DETECTION. + """ + + segments = proto.RepeatedField(proto.MESSAGE, number=1, message="VideoSegment",) + + label_detection_config = proto.Field( + proto.MESSAGE, number=2, message="LabelDetectionConfig", + ) + + shot_change_detection_config = proto.Field( + proto.MESSAGE, number=3, message="ShotChangeDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, number=4, message="ExplicitContentDetectionConfig", + ) + + face_detection_config = proto.Field( + proto.MESSAGE, number=5, message="FaceDetectionConfig", + ) + + +class LabelDetectionConfig(proto.Message): + r"""Config for LABEL_DETECTION. + + Attributes: + label_detection_mode (~.video_intelligence.LabelDetectionMode): + What labels should be detected with LABEL_DETECTION, in + addition to video-level labels or segment-level labels. 
If + unspecified, defaults to ``SHOT_MODE``. + stationary_camera (bool): + Whether the video has been shot from a stationary (i.e. + non-moving) camera. When set to true, might improve + detection accuracy for moving objects. Should be used with + ``SHOT_AND_FRAME_MODE`` enabled. + model (str): + Model to use for label detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + label_detection_mode = proto.Field(proto.ENUM, number=1, enum="LabelDetectionMode",) + + stationary_camera = proto.Field(proto.BOOL, number=2) + + model = proto.Field(proto.STRING, number=3) + + +class ShotChangeDetectionConfig(proto.Message): + r"""Config for SHOT_CHANGE_DETECTION. + + Attributes: + model (str): + Model to use for shot change detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class ExplicitContentDetectionConfig(proto.Message): + r"""Config for EXPLICIT_CONTENT_DETECTION. + + Attributes: + model (str): + Model to use for explicit content detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class FaceDetectionConfig(proto.Message): + r"""Config for FACE_DETECTION. + + Attributes: + model (str): + Model to use for face detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + include_bounding_boxes (bool): + Whether bounding boxes should be included in the + face annotation output. + """ + + model = proto.Field(proto.STRING, number=1) + + include_bounding_boxes = proto.Field(proto.BOOL, number=2) + + +class VideoSegment(proto.Message): + r"""Video segment. + + Attributes: + start_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the start of the segment + (inclusive).
+ end_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the end of the segment + (inclusive). + """ + + start_time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class LabelSegment(proto.Message): + r"""Video segment level annotation results for label detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a label was detected. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class LabelFrame(proto.Message): + r"""Video frame level annotation results for label detection. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class Entity(proto.Message): + r"""Detected entity from video analysis. + + Attributes: + entity_id (str): + Opaque entity ID. Some IDs may be available in `Google + Knowledge Graph Search + API `__. + description (str): + Textual description, e.g. ``Fixed-gear bicycle``. + language_code (str): + Language code for ``description`` in BCP-47 format. + """ + + entity_id = proto.Field(proto.STRING, number=1) + + description = proto.Field(proto.STRING, number=2) + + language_code = proto.Field(proto.STRING, number=3) + + +class LabelAnnotation(proto.Message): + r"""Label annotation. + + Attributes: + entity (~.video_intelligence.Entity): + Detected entity. 
+ category_entities (Sequence[~.video_intelligence.Entity]): + Common categories for the detected entity. E.g. when the + label is ``Terrier``, the category is likely ``dog``. In + some cases there might be more than one category, e.g. + ``Terrier`` could also be a ``pet``. + segments (Sequence[~.video_intelligence.LabelSegment]): + All video segments where a label was + detected. + frames (Sequence[~.video_intelligence.LabelFrame]): + All video frames where a label was detected. + """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + category_entities = proto.RepeatedField(proto.MESSAGE, number=2, message="Entity",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="LabelSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=4, message="LabelFrame",) + + +class ExplicitContentFrame(proto.Message): + r"""Video frame level annotation results for explicit content. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + pornography_likelihood (~.video_intelligence.Likelihood): + Likelihood of the pornography content. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + pornography_likelihood = proto.Field(proto.ENUM, number=2, enum="Likelihood",) + + +class ExplicitContentAnnotation(proto.Message): + r"""Explicit content annotation (based on per-frame visual + signals only). If no explicit content has been detected in a + frame, no annotations are present for that frame. + + Attributes: + frames (Sequence[~.video_intelligence.ExplicitContentFrame]): + All video frames where explicit content was + detected. + """ + + frames = proto.RepeatedField( + proto.MESSAGE, number=1, message="ExplicitContentFrame", + ) + + +class NormalizedBoundingBox(proto.Message): + r"""Normalized bounding box. The normalized vertex coordinates are + relative to the original image.
Range: [0, 1]. + + Attributes: + left (float): + Left X coordinate. + top (float): + Top Y coordinate. + right (float): + Right X coordinate. + bottom (float): + Bottom Y coordinate. + """ + + left = proto.Field(proto.FLOAT, number=1) + + top = proto.Field(proto.FLOAT, number=2) + + right = proto.Field(proto.FLOAT, number=3) + + bottom = proto.Field(proto.FLOAT, number=4) + + +class FaceSegment(proto.Message): + r"""Video segment level annotation results for face detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a face was detected. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + +class FaceFrame(proto.Message): + r"""Video frame level annotation results for face detection. + + Attributes: + normalized_bounding_boxes (Sequence[~.video_intelligence.NormalizedBoundingBox]): + Normalized bounding boxes in a frame. + There can be more than one box if the same + face is detected in multiple locations within + the current frame. + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + """ + + normalized_bounding_boxes = proto.RepeatedField( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class FaceAnnotation(proto.Message): + r"""Face annotation. + + Attributes: + thumbnail (bytes): + Thumbnail of a representative face view (in + JPEG format). + segments (Sequence[~.video_intelligence.FaceSegment]): + All video segments where a face was detected. + frames (Sequence[~.video_intelligence.FaceFrame]): + All video frames where a face was detected.
+ """ + + thumbnail = proto.Field(proto.BYTES, number=1) + + segments = proto.RepeatedField(proto.MESSAGE, number=2, message="FaceSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=3, message="FaceFrame",) + + +class VideoAnnotationResults(proto.Message): + r"""Annotation results for a single video. + + Attributes: + input_uri (str): + Video file location in `Google Cloud + Storage `__. + segment_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on video level or user + specified segment level. There is exactly one + element for each unique label. + shot_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on shot level. + There is exactly one element for each unique + label. + frame_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on frame level. + There is exactly one element for each unique + label. + face_annotations (Sequence[~.video_intelligence.FaceAnnotation]): + Face annotations. There is exactly one + element for each unique face. + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotations. Each shot is represented as + a video segment. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation. + error (~.status.Status): + If set, indicates an error. Note that for a single + ``AnnotateVideoRequest`` some videos may succeed and some + may fail. 
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + segment_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + shot_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=3, message="LabelAnnotation", + ) + + frame_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="LabelAnnotation", + ) + + face_annotations = proto.RepeatedField( + proto.MESSAGE, number=5, message="FaceAnnotation", + ) + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=6, message="VideoSegment", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=7, message="ExplicitContentAnnotation", + ) + + error = proto.Field(proto.MESSAGE, number=9, message=status.Status,) + + +class AnnotateVideoResponse(proto.Message): + r"""Video annotation response. Included in the ``response`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_results (Sequence[~.video_intelligence.VideoAnnotationResults]): + Annotation results for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_results = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationResults", + ) + + +class VideoAnnotationProgress(proto.Message): + r"""Annotation progress for a single video. + + Attributes: + input_uri (str): + Video file location in `Google Cloud + Storage `__. + progress_percent (int): + Approximate percentage processed thus far. + Guaranteed to be 100 when fully processed. + start_time (~.timestamp.Timestamp): + Time when the request was received. + update_time (~.timestamp.Timestamp): + Time of the most recent update. 
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + progress_percent = proto.Field(proto.INT32, number=2) + + start_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,) + + update_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,) + + +class AnnotateVideoProgress(proto.Message): + r"""Video annotation progress. Included in the ``metadata`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_progress (Sequence[~.video_intelligence.VideoAnnotationProgress]): + Progress metadata for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_progress = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationProgress", + ) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/__init__.py index 3cc1dbe0c4af..b3f78d1504ba 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/__init__.py @@ -1,49 +1,71 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+# - -from __future__ import absolute_import -import sys -import warnings - -from google.cloud.videointelligence_v1p1beta1 import types -from google.cloud.videointelligence_v1p1beta1.gapic import enums -from google.cloud.videointelligence_v1p1beta1.gapic import ( - video_intelligence_service_client, -) - - -if sys.version_info[:2] == (2, 7): - message = ( - "A future version of this library will drop support for Python 2.7. " - "More details about Python 2 support for Google Cloud Client Libraries " - "can be found at https://cloud.google.com/python/docs/python2-sunset/" - ) - warnings.warn(message, DeprecationWarning) - - -class VideoIntelligenceServiceClient( - video_intelligence_service_client.VideoIntelligenceServiceClient -): - __doc__ = video_intelligence_service_client.VideoIntelligenceServiceClient.__doc__ - enums = enums +from .services.video_intelligence_service import VideoIntelligenceServiceClient +from .types.video_intelligence import AnnotateVideoProgress +from .types.video_intelligence import AnnotateVideoRequest +from .types.video_intelligence import AnnotateVideoResponse +from .types.video_intelligence import Entity +from .types.video_intelligence import ExplicitContentAnnotation +from .types.video_intelligence import ExplicitContentDetectionConfig +from .types.video_intelligence import ExplicitContentFrame +from .types.video_intelligence import Feature +from .types.video_intelligence import LabelAnnotation +from .types.video_intelligence import LabelDetectionConfig +from .types.video_intelligence import LabelDetectionMode +from .types.video_intelligence import LabelFrame +from .types.video_intelligence import LabelSegment +from .types.video_intelligence import Likelihood +from .types.video_intelligence import ShotChangeDetectionConfig +from .types.video_intelligence import SpeechContext +from .types.video_intelligence import SpeechRecognitionAlternative +from .types.video_intelligence import SpeechTranscription +from .types.video_intelligence 
import SpeechTranscriptionConfig +from .types.video_intelligence import VideoAnnotationProgress +from .types.video_intelligence import VideoAnnotationResults +from .types.video_intelligence import VideoContext +from .types.video_intelligence import VideoSegment +from .types.video_intelligence import WordInfo __all__ = ( - "enums", - "types", + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "Entity", + "ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "ShotChangeDetectionConfig", + "SpeechContext", + "SpeechRecognitionAlternative", + "SpeechTranscription", + "SpeechTranscriptionConfig", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoSegment", + "WordInfo", "VideoIntelligenceServiceClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/enums.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/enums.py deleted file mode 100644 index 337df8ee846a..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/enums.py +++ /dev/null @@ -1,76 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Feature(enum.IntEnum): - """ - Video annotation feature. - - Attributes: - FEATURE_UNSPECIFIED (int): Unspecified. - LABEL_DETECTION (int): Label detection. Detect objects, such as dog or flower. - SHOT_CHANGE_DETECTION (int): Shot change detection. - EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - SPEECH_TRANSCRIPTION (int): Speech transcription. - """ - - FEATURE_UNSPECIFIED = 0 - LABEL_DETECTION = 1 - SHOT_CHANGE_DETECTION = 2 - EXPLICIT_CONTENT_DETECTION = 3 - SPEECH_TRANSCRIPTION = 6 - - -class LabelDetectionMode(enum.IntEnum): - """ - Label detection mode. - - Attributes: - LABEL_DETECTION_MODE_UNSPECIFIED (int): Unspecified. - SHOT_MODE (int): Detect shot-level labels. - FRAME_MODE (int): Detect frame-level labels. - SHOT_AND_FRAME_MODE (int): Detect both shot-level and frame-level labels. - """ - - LABEL_DETECTION_MODE_UNSPECIFIED = 0 - SHOT_MODE = 1 - FRAME_MODE = 2 - SHOT_AND_FRAME_MODE = 3 - - -class Likelihood(enum.IntEnum): - """ - Bucketized representation of likelihood. - - Attributes: - LIKELIHOOD_UNSPECIFIED (int): Unspecified likelihood. - VERY_UNLIKELY (int): Very unlikely. - UNLIKELY (int): Unlikely. - POSSIBLE (int): Possible. - LIKELY (int): Likely. - VERY_LIKELY (int): Very likely. 
- """ - - LIKELIHOOD_UNSPECIFIED = 0 - VERY_UNLIKELY = 1 - UNLIKELY = 2 - POSSIBLE = 3 - LIKELY = 4 - VERY_LIKELY = 5 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/video_intelligence_service_grpc_transport.py deleted file mode 100644 index d67d7b025ac7..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/transports/video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.videointelligence_v1p1beta1.proto import video_intelligence_pb2_grpc - - -class VideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1p1beta1 VideoIntelligenceService API. 
- - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "video_intelligence_service_stub": video_intelligence_pb2_grpc.VideoIntelligenceServiceStub( - channel - ), - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. 
- self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def annotate_video(self): - """Return the gRPC stub for :meth:`VideoIntelligenceServiceClient.annotate_video`. - - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["video_intelligence_service_stub"].AnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client.py deleted file mode 100644 index 8d8700060615..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client.py +++ /dev/null @@ -1,309 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.cloud.videointelligence.v1p1beta1 VideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import grpc - -from google.cloud.videointelligence_v1p1beta1.gapic import enums -from google.cloud.videointelligence_v1p1beta1.gapic import ( - video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1p1beta1.gapic.transports import ( - video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1p1beta1.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1p1beta1.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class VideoIntelligenceServiceClient(object): - """Service that implements Google Cloud Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = ( - "google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService" - ) - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - VideoIntelligenceServiceClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.VideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.VideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. 
API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) 
- self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def annotate_video( - self, - features, - input_uri=None, - input_content=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Example: - >>> from google.cloud import videointelligence_v1p1beta1 - >>> from google.cloud.videointelligence_v1p1beta1 import enums - >>> - >>> client = videointelligence_v1p1beta1.VideoIntelligenceServiceClient() - >>> - >>> features_element = enums.Feature.LABEL_DETECTION - >>> features = [features_element] - >>> input_uri = 'gs://cloud-samples-data/video/cat.mp4' - >>> - >>> response = client.annotate_video(features, input_uri=input_uri) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - features (list[~google.cloud.videointelligence_v1p1beta1.types.Feature]): Required. Requested video annotation features. - input_uri (str): Input video location. 
Currently, only `Google Cloud - Storage `__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs `__. A - video URI may include wildcards in ``object-id``, and thus identify - multiple videos. Supported wildcards: '*' to match 0 or more characters; - '?' to match 1 character. If unset, the input video should be embedded - in the request as ``input_content``. If set, ``input_content`` should be - unset. - input_content (bytes): The video data bytes. If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be unset. - video_context (Union[dict, ~google.cloud.videointelligence_v1p1beta1.types.VideoContext]): Additional video context and/or feature-specific parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.videointelligence_v1p1beta1.types.VideoContext` - output_uri (str): Optional. Location where the output (in JSON format) should be - stored. Currently, only `Google Cloud - Storage `__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs `__. - location_id (str): Optional. Cloud region where annotation should take place. Supported - cloud regions: ``us-east1``, ``us-west1``, ``europe-west1``, - ``asia-east1``. If no region is specified, a region will be determined - based on video file location. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. 
- metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.videointelligence_v1p1beta1.types._OperationFuture` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.annotate_video, - default_retry=self._method_configs["AnnotateVideo"].retry, - default_timeout=self._method_configs["AnnotateVideo"].timeout, - client_info=self._client_info, - ) - - request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, - input_uri=input_uri, - input_content=input_content, - video_context=video_context, - output_uri=output_uri, - location_id=location_id, - ) - operation = self._inner_api_calls["annotate_video"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - video_intelligence_pb2.AnnotateVideoResponse, - metadata_type=video_intelligence_pb2.AnnotateVideoProgress, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client_config.py deleted file mode 100644 index 03bdba6a8e4c..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/gapic/video_intelligence_service_client_config.py +++ /dev/null @@ -1,37 +0,0 @@ -config = { - "interfaces": { - 
"google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService": { - "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - }, - "retry_params": { - "retry_policy_1_params": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - "retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "AnnotateVideo": { - "timeout_millis": 600000, - "retry_codes_name": "retry_policy_1_codes", - "retry_params_name": "retry_policy_1_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto deleted file mode 100644 index a54bddd07d0e..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto +++ /dev/null @@ -1,444 +0,0 @@ -// Copyright 2019 Google LLC. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// - -syntax = "proto3"; - -package google.cloud.videointelligence.v1p1beta1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; - -option csharp_namespace = "Google.Cloud.VideoIntelligence.V1P1Beta1"; -option go_package = "google.golang.org/genproto/googleapis/cloud/videointelligence/v1p1beta1;videointelligence"; -option java_multiple_files = true; -option java_outer_classname = "VideoIntelligenceServiceProto"; -option java_package = "com.google.cloud.videointelligence.v1p1beta1"; -option php_namespace = "Google\\Cloud\\VideoIntelligence\\V1p1beta1"; -option ruby_package = "Google::Cloud::VideoIntelligence::V1p1beta1"; - -// Service that implements Google Cloud Video Intelligence API. -service VideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = "https://www.googleapis.com/auth/cloud-platform"; - - // Performs asynchronous video annotation. Progress and results can be - // retrieved through the `google.longrunning.Operations` interface. - // `Operation.metadata` contains `AnnotateVideoProgress` (progress). - // `Operation.response` contains `AnnotateVideoResponse` (results). 
- rpc AnnotateVideo(AnnotateVideoRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1p1beta1/videos:annotate" - body: "*" - }; - option (google.api.method_signature) = "input_uri,features"; - option (google.longrunning.operation_info) = { - response_type: "AnnotateVideoResponse" - metadata_type: "AnnotateVideoProgress" - }; - } -} - -// Video annotation request. -message AnnotateVideoRequest { - // Input video location. Currently, only - // [Google Cloud Storage](https://cloud.google.com/storage/) URIs are - // supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see - // [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). - // A video URI may include wildcards in `object-id`, and thus identify - // multiple videos. Supported wildcards: '*' to match 0 or more characters; - // '?' to match 1 character. If unset, the input video should be embedded - // in the request as `input_content`. If set, `input_content` should be unset. - string input_uri = 1; - - // The video data bytes. - // If unset, the input video(s) should be specified via `input_uri`. - // If set, `input_uri` should be unset. - bytes input_content = 6; - - // Required. Requested video annotation features. - repeated Feature features = 2 [(google.api.field_behavior) = REQUIRED]; - - // Additional video context and/or feature-specific parameters. - VideoContext video_context = 3; - - // Optional. Location where the output (in JSON format) should be stored. - // Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) - // URIs are supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). 
For more information, see - // [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). - string output_uri = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. Cloud region where annotation should take place. Supported cloud - // regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region - // is specified, a region will be determined based on video file location. - string location_id = 5 [(google.api.field_behavior) = OPTIONAL]; -} - -// Video context and/or feature-specific parameters. -message VideoContext { - // Video segments to annotate. The segments may overlap and are not required - // to be contiguous or span the whole video. If unspecified, each video is - // treated as a single segment. - repeated VideoSegment segments = 1; - - // Config for LABEL_DETECTION. - LabelDetectionConfig label_detection_config = 2; - - // Config for SHOT_CHANGE_DETECTION. - ShotChangeDetectionConfig shot_change_detection_config = 3; - - // Config for EXPLICIT_CONTENT_DETECTION. - ExplicitContentDetectionConfig explicit_content_detection_config = 4; - - // Config for SPEECH_TRANSCRIPTION. - SpeechTranscriptionConfig speech_transcription_config = 6; -} - -// Config for LABEL_DETECTION. -message LabelDetectionConfig { - // What labels should be detected with LABEL_DETECTION, in addition to - // video-level labels or segment-level labels. - // If unspecified, defaults to `SHOT_MODE`. - LabelDetectionMode label_detection_mode = 1; - - // Whether the video has been shot from a stationary (i.e. non-moving) camera. - // When set to true, might improve detection accuracy for moving objects. - // Should be used with `SHOT_AND_FRAME_MODE` enabled. - bool stationary_camera = 2; - - // Model to use for label detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 3; -} - -// Config for SHOT_CHANGE_DETECTION. 
-message ShotChangeDetectionConfig { - // Model to use for shot change detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for EXPLICIT_CONTENT_DETECTION. -message ExplicitContentDetectionConfig { - // Model to use for explicit content detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Video segment. -message VideoSegment { - // Time-offset, relative to the beginning of the video, - // corresponding to the start of the segment (inclusive). - google.protobuf.Duration start_time_offset = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the end of the segment (inclusive). - google.protobuf.Duration end_time_offset = 2; -} - -// Video segment level annotation results for label detection. -message LabelSegment { - // Video segment where a label was detected. - VideoSegment segment = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Video frame level annotation results for label detection. -message LabelFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Detected entity from video analysis. -message Entity { - // Opaque entity ID. Some IDs may be available in - // [Google Knowledge Graph Search - // API](https://developers.google.com/knowledge-graph/). - string entity_id = 1; - - // Textual description, e.g. `Fixed-gear bicycle`. - string description = 2; - - // Language code for `description` in BCP-47 format. - string language_code = 3; -} - -// Label annotation. -message LabelAnnotation { - // Detected entity. - Entity entity = 1; - - // Common categories for the detected entity. - // E.g. 
when the label is `Terrier` the category is likely `dog`. And in some - cases there might be more than one category, e.g. `Terrier` could also be - a `pet`. - repeated Entity category_entities = 2; - - // All video segments where a label was detected. - repeated LabelSegment segments = 3; - - // All video frames where a label was detected. - repeated LabelFrame frames = 4; -} - -// Video frame level annotation results for explicit content. -message ExplicitContentFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Likelihood of the pornography content. - Likelihood pornography_likelihood = 2; -} - -// Explicit content annotation (based on per-frame visual signals only). -// If no explicit content has been detected in a frame, no annotations are -// present for that frame. -message ExplicitContentAnnotation { - // All video frames where explicit content was detected. - repeated ExplicitContentFrame frames = 1; -} - -// Annotation results for a single video. -message VideoAnnotationResults { - // Output only. Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Label annotations on video level or user specified segment level. - // There is exactly one element for each unique label. - repeated LabelAnnotation segment_label_annotations = 2; - - // Label annotations on shot level. - // There is exactly one element for each unique label. - repeated LabelAnnotation shot_label_annotations = 3; - - // Label annotations on frame level. - // There is exactly one element for each unique label. - repeated LabelAnnotation frame_label_annotations = 4; - - // Shot annotations. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 6; - - // Explicit content annotation. - ExplicitContentAnnotation explicit_annotation = 7; - - // Speech transcription.
- repeated SpeechTranscription speech_transcriptions = 11; - - // Output only. If set, indicates an error. Note that for a single - // `AnnotateVideoRequest` some videos may succeed and some may fail. - google.rpc.Status error = 9; -} - -// Video annotation response. Included in the `response` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoResponse { - // Annotation results for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationResults annotation_results = 1; -} - -// Annotation progress for a single video. -message VideoAnnotationProgress { - // Output only. Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Output only. Approximate percentage processed thus far. Guaranteed to be - // 100 when fully processed. - int32 progress_percent = 2; - - // Output only. Time when the request was received. - google.protobuf.Timestamp start_time = 3; - - // Output only. Time of the most recent update. - google.protobuf.Timestamp update_time = 4; -} - -// Video annotation progress. Included in the `metadata` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoProgress { - // Progress metadata for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationProgress annotation_progress = 1; -} - -// Config for SPEECH_TRANSCRIPTION. -message SpeechTranscriptionConfig { - // Required. *Required* The language of the supplied audio as a - // [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. - // Example: "en-US". - // See [Language Support](https://cloud.google.com/speech/docs/languages) - // for a list of the currently supported language codes. - string language_code = 1 [(google.api.field_behavior) = REQUIRED]; - - // Optional. 
Maximum number of recognition hypotheses to be returned. - // Specifically, the maximum number of `SpeechRecognitionAlternative` messages - // within each `SpeechTranscription`. The server may return fewer than - // `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will - // return a maximum of one. If omitted, will return a maximum of one. - int32 max_alternatives = 2 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If set to `true`, the server will attempt to filter out - // profanities, replacing all but the initial character in each filtered word - // with asterisks, e.g. "f***". If set to `false` or omitted, profanities - // won't be filtered out. - bool filter_profanity = 3 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. A means to provide context to assist the speech recognition. - repeated SpeechContext speech_contexts = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If 'true', adds punctuation to recognition result hypotheses. - // This feature is only available in select languages. Setting this for - // requests in other languages has no effect at all. The default 'false' value - // does not add punctuation to result hypotheses. NOTE: "This is currently - // offered as an experimental service, complimentary to all users. In the - // future this may be exclusively available as a premium feature." - bool enable_automatic_punctuation = 5 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. For file formats, such as MXF or MKV, supporting multiple audio - // tracks, specify up to two tracks. Default: track 0. - repeated int32 audio_tracks = 6 [(google.api.field_behavior) = OPTIONAL]; -} - -// Provides "hints" to the speech recognizer to favor specific words and phrases -// in the results. -message SpeechContext { - // Optional. A list of strings containing words and phrases "hints" so that - // the speech recognition is more likely to recognize them. 
This can be used - // to improve the accuracy for specific words and phrases, for example, if - // specific commands are typically spoken by the user. This can also be used - // to add additional words to the vocabulary of the recognizer. See - // [usage limits](https://cloud.google.com/speech/limits#content). - repeated string phrases = 1 [(google.api.field_behavior) = OPTIONAL]; -} - -// A speech recognition result corresponding to a portion of the audio. -message SpeechTranscription { - // May contain one or more recognition hypotheses (up to the maximum specified - // in `max_alternatives`). These alternatives are ordered in terms of - // accuracy, with the top (first) alternative being the most probable, as - // ranked by the recognizer. - repeated SpeechRecognitionAlternative alternatives = 1; -} - -// Alternative hypotheses (a.k.a. n-best list). -message SpeechRecognitionAlternative { - // Output only. Transcript text representing the words that the user spoke. - string transcript = 1; - - // Output only. The confidence estimate between 0.0 and 1.0. A higher number - // indicates an estimated greater likelihood that the recognized words are - // correct. This field is set only for the top alternative. - // This field is not guaranteed to be accurate and users should not rely on it - // to be always provided. - // The default of 0.0 is a sentinel value indicating `confidence` was not set. - float confidence = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. A list of word-specific information for each recognized word. - repeated WordInfo words = 3; -} - -// Word-specific information for recognized words. Word information is only -// included in the response when certain request parameters are set, such -// as `enable_word_time_offsets`. -message WordInfo { - // Output only. Time offset relative to the beginning of the audio, and - // corresponding to the start of the spoken word. 
This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. - google.protobuf.Duration start_time = 1; - - // Output only. Time offset relative to the beginning of the audio, and - // corresponding to the end of the spoken word. This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. - google.protobuf.Duration end_time = 2; - - // Output only. The word corresponding to this set of information. - string word = 3; -} - -// Video annotation feature. -enum Feature { - // Unspecified. - FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - LABEL_DETECTION = 1; - - // Shot change detection. - SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - EXPLICIT_CONTENT_DETECTION = 3; - - // Speech transcription. - SPEECH_TRANSCRIPTION = 6; -} - -// Label detection mode. -enum LabelDetectionMode { - // Unspecified. - LABEL_DETECTION_MODE_UNSPECIFIED = 0; - - // Detect shot-level labels. - SHOT_MODE = 1; - - // Detect frame-level labels. - FRAME_MODE = 2; - - // Detect both shot-level and frame-level labels. - SHOT_AND_FRAME_MODE = 3; -} - -// Bucketized representation of likelihood. -enum Likelihood { - // Unspecified likelihood. - LIKELIHOOD_UNSPECIFIED = 0; - - // Very unlikely. - VERY_UNLIKELY = 1; - - // Unlikely. - UNLIKELY = 2; - - // Possible. - POSSIBLE = 3; - - // Likely. - LIKELY = 4; - - // Very likely. 
- VERY_LIKELY = 5; -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2.py deleted file mode 100644 index c5de794cce20..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2.py +++ /dev/null @@ -1,2559 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto -"""Generated protocol buffer code.""" -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto", - package="google.cloud.videointelligence.v1p1beta1", - syntax="proto3", - 
serialized_options=b"\n,com.google.cloud.videointelligence.v1p1beta1B\035VideoIntelligenceServiceProtoP\001ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p1beta1;videointelligence\252\002(Google.Cloud.VideoIntelligence.V1P1Beta1\312\002(Google\\Cloud\\VideoIntelligence\\V1p1beta1\352\002+Google::Cloud::VideoIntelligence::V1p1beta1", - create_key=_descriptor._internal_create_key, - serialized_pb=b'\nGgoogle/cloud/videointelligence_v1p1beta1/proto/video_intelligence.proto\x12(google.cloud.videointelligence.v1p1beta1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a#google/longrunning/operations.proto\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17google/rpc/status.proto"\x8c\x02\n\x14\x41nnotateVideoRequest\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x15\n\rinput_content\x18\x06 \x01(\x0c\x12H\n\x08\x66\x65\x61tures\x18\x02 \x03(\x0e\x32\x31.google.cloud.videointelligence.v1p1beta1.FeatureB\x03\xe0\x41\x02\x12M\n\rvideo_context\x18\x03 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p1beta1.VideoContext\x12\x17\n\noutput_uri\x18\x04 \x01(\tB\x03\xe0\x41\x01\x12\x18\n\x0blocation_id\x18\x05 \x01(\tB\x03\xe0\x41\x01"\x82\x04\n\x0cVideoContext\x12H\n\x08segments\x18\x01 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p1beta1.VideoSegment\x12^\n\x16label_detection_config\x18\x02 \x01(\x0b\x32>.google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig\x12i\n\x1cshot_change_detection_config\x18\x03 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p1beta1.ShotChangeDetectionConfig\x12s\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32H.google.cloud.videointelligence.v1p1beta1.ExplicitContentDetectionConfig\x12h\n\x1bspeech_transcription_config\x18\x06 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig"\x9c\x01\n\x14LabelDetectionConfig\x12Z\n\x14label_detection_mode\x18\x01 
\x01(\x0e\x32<.google.cloud.videointelligence.v1p1beta1.LabelDetectionMode\x12\x19\n\x11stationary_camera\x18\x02 \x01(\x08\x12\r\n\x05model\x18\x03 \x01(\t"*\n\x19ShotChangeDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"/\n\x1e\x45xplicitContentDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"x\n\x0cVideoSegment\x12\x34\n\x11start_time_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x32\n\x0f\x65nd_time_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"k\n\x0cLabelSegment\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p1beta1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02"P\n\nLabelFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x12\n\nconfidence\x18\x02 \x01(\x02"G\n\x06\x45ntity\x12\x11\n\tentity_id\x18\x01 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 \x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t"\xb0\x02\n\x0fLabelAnnotation\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p1beta1.Entity\x12K\n\x11\x63\x61tegory_entities\x18\x02 \x03(\x0b\x32\x30.google.cloud.videointelligence.v1p1beta1.Entity\x12H\n\x08segments\x18\x03 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p1beta1.LabelSegment\x12\x44\n\x06\x66rames\x18\x04 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1p1beta1.LabelFrame"\x9c\x01\n\x14\x45xplicitContentFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12T\n\x16pornography_likelihood\x18\x02 \x01(\x0e\x32\x34.google.cloud.videointelligence.v1p1beta1.Likelihood"k\n\x19\x45xplicitContentAnnotation\x12N\n\x06\x66rames\x18\x01 \x03(\x0b\x32>.google.cloud.videointelligence.v1p1beta1.ExplicitContentFrame"\xf5\x04\n\x16VideoAnnotationResults\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\\\n\x19segment_label_annotations\x18\x02 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p1beta1.LabelAnnotation\x12Y\n\x16shot_label_annotations\x18\x03 
\x03(\x0b\x32\x39.google.cloud.videointelligence.v1p1beta1.LabelAnnotation\x12Z\n\x17\x66rame_label_annotations\x18\x04 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p1beta1.LabelAnnotation\x12P\n\x10shot_annotations\x18\x06 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p1beta1.VideoSegment\x12`\n\x13\x65xplicit_annotation\x18\x07 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p1beta1.ExplicitContentAnnotation\x12\\\n\x15speech_transcriptions\x18\x0b \x03(\x0b\x32=.google.cloud.videointelligence.v1p1beta1.SpeechTranscription\x12!\n\x05\x65rror\x18\t \x01(\x0b\x32\x12.google.rpc.Status"u\n\x15\x41nnotateVideoResponse\x12\\\n\x12\x61nnotation_results\x18\x01 \x03(\x0b\x32@.google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults"\xa7\x01\n\x17VideoAnnotationProgress\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x18\n\x10progress_percent\x18\x02 \x01(\x05\x12.\n\nstart_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0bupdate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"w\n\x15\x41nnotateVideoProgress\x12^\n\x13\x61nnotation_progress\x18\x01 \x03(\x0b\x32\x41.google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress"\x92\x02\n\x19SpeechTranscriptionConfig\x12\x1a\n\rlanguage_code\x18\x01 \x01(\tB\x03\xe0\x41\x02\x12\x1d\n\x10max_alternatives\x18\x02 \x01(\x05\x42\x03\xe0\x41\x01\x12\x1d\n\x10\x66ilter_profanity\x18\x03 \x01(\x08\x42\x03\xe0\x41\x01\x12U\n\x0fspeech_contexts\x18\x04 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1p1beta1.SpeechContextB\x03\xe0\x41\x01\x12)\n\x1c\x65nable_automatic_punctuation\x18\x05 \x01(\x08\x42\x03\xe0\x41\x01\x12\x19\n\x0c\x61udio_tracks\x18\x06 \x03(\x05\x42\x03\xe0\x41\x01"%\n\rSpeechContext\x12\x14\n\x07phrases\x18\x01 \x03(\tB\x03\xe0\x41\x01"s\n\x13SpeechTranscription\x12\\\n\x0c\x61lternatives\x18\x01 \x03(\x0b\x32\x46.google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative"\x8e\x01\n\x1cSpeechRecognitionAlternative\x12\x12\n\ntranscript\x18\x01 
\x01(\t\x12\x17\n\nconfidence\x18\x02 \x01(\x02\x42\x03\xe0\x41\x03\x12\x41\n\x05words\x18\x03 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1p1beta1.WordInfo"t\n\x08WordInfo\x12-\n\nstart_time\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12+\n\x08\x65nd_time\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x0c\n\x04word\x18\x03 \x01(\t*\x8c\x01\n\x07\x46\x65\x61ture\x12\x17\n\x13\x46\x45\x41TURE_UNSPECIFIED\x10\x00\x12\x13\n\x0fLABEL_DETECTION\x10\x01\x12\x19\n\x15SHOT_CHANGE_DETECTION\x10\x02\x12\x1e\n\x1a\x45XPLICIT_CONTENT_DETECTION\x10\x03\x12\x18\n\x14SPEECH_TRANSCRIPTION\x10\x06*r\n\x12LabelDetectionMode\x12$\n LABEL_DETECTION_MODE_UNSPECIFIED\x10\x00\x12\r\n\tSHOT_MODE\x10\x01\x12\x0e\n\nFRAME_MODE\x10\x02\x12\x17\n\x13SHOT_AND_FRAME_MODE\x10\x03*t\n\nLikelihood\x12\x1a\n\x16LIKELIHOOD_UNSPECIFIED\x10\x00\x12\x11\n\rVERY_UNLIKELY\x10\x01\x12\x0c\n\x08UNLIKELY\x10\x02\x12\x0c\n\x08POSSIBLE\x10\x03\x12\n\n\x06LIKELY\x10\x04\x12\x0f\n\x0bVERY_LIKELY\x10\x05\x32\xce\x02\n\x18VideoIntelligenceService\x12\xdb\x01\n\rAnnotateVideo\x12>.google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest\x1a\x1d.google.longrunning.Operation"k\x82\xd3\xe4\x93\x02\x1f"\x1a/v1p1beta1/videos:annotate:\x01*\xda\x41\x12input_uri,features\xca\x41.\n\x15\x41nnotateVideoResponse\x12\x15\x41nnotateVideoProgress\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platformB\xae\x02\n,com.google.cloud.videointelligence.v1p1beta1B\x1dVideoIntelligenceServiceProtoP\x01ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p1beta1;videointelligence\xaa\x02(Google.Cloud.VideoIntelligence.V1P1Beta1\xca\x02(Google\\Cloud\\VideoIntelligence\\V1p1beta1\xea\x02+Google::Cloud::VideoIntelligence::V1p1beta1b\x06proto3', - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - 
google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_rpc_dot_status__pb2.DESCRIPTOR, - ], -) - -_FEATURE = _descriptor.EnumDescriptor( - name="Feature", - full_name="google.cloud.videointelligence.v1p1beta1.Feature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SPEECH_TRANSCRIPTION", - index=4, - number=6, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4072, - serialized_end=4212, -) -_sym_db.RegisterEnumDescriptor(_FEATURE) - -Feature = enum_type_wrapper.EnumTypeWrapper(_FEATURE) -_LABELDETECTIONMODE = _descriptor.EnumDescriptor( - name="LabelDetectionMode", - full_name="google.cloud.videointelligence.v1p1beta1.LabelDetectionMode", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION_MODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.EnumValueDescriptor( - name="SHOT_MODE", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FRAME_MODE", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_AND_FRAME_MODE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4214, - serialized_end=4328, -) -_sym_db.RegisterEnumDescriptor(_LABELDETECTIONMODE) - -LabelDetectionMode = enum_type_wrapper.EnumTypeWrapper(_LABELDETECTIONMODE) -_LIKELIHOOD = _descriptor.EnumDescriptor( - name="Likelihood", - full_name="google.cloud.videointelligence.v1p1beta1.Likelihood", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LIKELIHOOD_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_UNLIKELY", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="UNLIKELY", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="POSSIBLE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LIKELY", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_LIKELY", - index=5, - number=5, - serialized_options=None, - type=None, - 
create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4330, - serialized_end=4446, -) -_sym_db.RegisterEnumDescriptor(_LIKELIHOOD) - -Likelihood = enum_type_wrapper.EnumTypeWrapper(_LIKELIHOOD) -FEATURE_UNSPECIFIED = 0 -LABEL_DETECTION = 1 -SHOT_CHANGE_DETECTION = 2 -EXPLICIT_CONTENT_DETECTION = 3 -SPEECH_TRANSCRIPTION = 6 -LABEL_DETECTION_MODE_UNSPECIFIED = 0 -SHOT_MODE = 1 -FRAME_MODE = 2 -SHOT_AND_FRAME_MODE = 3 -LIKELIHOOD_UNSPECIFIED = 0 -VERY_UNLIKELY = 1 -UNLIKELY = 2 -POSSIBLE = 3 -LIKELY = 4 -VERY_LIKELY = 5 - - -_ANNOTATEVIDEOREQUEST = _descriptor.Descriptor( - name="AnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.input_content", - index=1, - number=6, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="features", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.features", - index=2, - number=2, - type=14, - cpp_type=8, - label=3, - 
has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_context", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.video_context", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="output_uri", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.output_uri", - index=4, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="location_id", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest.location_id", - index=5, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=333, - serialized_end=601, -) - - -_VIDEOCONTEXT = _descriptor.Descriptor( - name="VideoContext", - 
full_name="google.cloud.videointelligence.v1p1beta1.VideoContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p1beta1.VideoContext.segments", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_detection_config", - full_name="google.cloud.videointelligence.v1p1beta1.VideoContext.label_detection_config", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1p1beta1.VideoContext.shot_change_detection_config", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - full_name="google.cloud.videointelligence.v1p1beta1.VideoContext.explicit_content_detection_config", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcription_config", - full_name="google.cloud.videointelligence.v1p1beta1.VideoContext.speech_transcription_config", - index=4, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=604, - serialized_end=1118, -) - - -_LABELDETECTIONCONFIG = _descriptor.Descriptor( - name="LabelDetectionConfig", - full_name="google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="label_detection_mode", - full_name="google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig.label_detection_mode", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="stationary_camera", - full_name="google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig.stationary_camera", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig.model", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1121, - serialized_end=1277, -) - - -_SHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="ShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1p1beta1.ShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p1beta1.ShotChangeDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1279, - serialized_end=1321, -) - - -_EXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="ExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - 
fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1323, - serialized_end=1370, -) - - -_VIDEOSEGMENT = _descriptor.Descriptor( - name="VideoSegment", - full_name="google.cloud.videointelligence.v1p1beta1.VideoSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time_offset", - full_name="google.cloud.videointelligence.v1p1beta1.VideoSegment.start_time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time_offset", - full_name="google.cloud.videointelligence.v1p1beta1.VideoSegment.end_time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - 
extension_ranges=[], - oneofs=[], - serialized_start=1372, - serialized_end=1492, -) - - -_LABELSEGMENT = _descriptor.Descriptor( - name="LabelSegment", - full_name="google.cloud.videointelligence.v1p1beta1.LabelSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p1beta1.LabelSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p1beta1.LabelSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1494, - serialized_end=1601, -) - - -_LABELFRAME = _descriptor.Descriptor( - name="LabelFrame", - full_name="google.cloud.videointelligence.v1p1beta1.LabelFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p1beta1.LabelFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, 
- is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p1beta1.LabelFrame.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1603, - serialized_end=1683, -) - - -_ENTITY = _descriptor.Descriptor( - name="Entity", - full_name="google.cloud.videointelligence.v1p1beta1.Entity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity_id", - full_name="google.cloud.videointelligence.v1p1beta1.Entity.entity_id", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - full_name="google.cloud.videointelligence.v1p1beta1.Entity.description", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - 
name="language_code", - full_name="google.cloud.videointelligence.v1p1beta1.Entity.language_code", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1685, - serialized_end=1756, -) - - -_LABELANNOTATION = _descriptor.Descriptor( - name="LabelAnnotation", - full_name="google.cloud.videointelligence.v1p1beta1.LabelAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p1beta1.LabelAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="category_entities", - full_name="google.cloud.videointelligence.v1p1beta1.LabelAnnotation.category_entities", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p1beta1.LabelAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - 
label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p1beta1.LabelAnnotation.frames", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1759, - serialized_end=2063, -) - - -_EXPLICITCONTENTFRAME = _descriptor.Descriptor( - name="ExplicitContentFrame", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="pornography_likelihood", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentFrame.pornography_likelihood", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2066, - serialized_end=2222, -) - - -_EXPLICITCONTENTANNOTATION = _descriptor.Descriptor( - name="ExplicitContentAnnotation", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p1beta1.ExplicitContentAnnotation.frames", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2224, - serialized_end=2331, -) - - -_VIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="VideoAnnotationResults", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - 
file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_label_annotations", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.segment_label_annotations", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_label_annotations", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.shot_label_annotations", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_label_annotations", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.frame_label_annotations", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.shot_annotations", - index=4, - number=6, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.FieldDescriptor( - name="explicit_annotation", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.explicit_annotation", - index=5, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcriptions", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.speech_transcriptions", - index=6, - number=11, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="error", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults.error", - index=7, - number=9, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2334, - serialized_end=2963, -) - - -_ANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="AnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_results", - 
full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse.annotation_results", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2965, - serialized_end=3082, -) - - -_VIDEOANNOTATIONPROGRESS = _descriptor.Descriptor( - name="VideoAnnotationProgress", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress.progress_percent", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress.start_time", - 
index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="update_time", - full_name="google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress.update_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3085, - serialized_end=3252, -) - - -_ANNOTATEVIDEOPROGRESS = _descriptor.Descriptor( - name="AnnotateVideoProgress", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_progress", - full_name="google.cloud.videointelligence.v1p1beta1.AnnotateVideoProgress.annotation_progress", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3254, - serialized_end=3373, -) - - -_SPEECHTRANSCRIPTIONCONFIG = 
_descriptor.Descriptor( - name="SpeechTranscriptionConfig", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.language_code", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="max_alternatives", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.max_alternatives", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="filter_profanity", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.filter_profanity", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_contexts", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.speech_contexts", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - 
enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_automatic_punctuation", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.enable_automatic_punctuation", - index=4, - number=5, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="audio_tracks", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig.audio_tracks", - index=5, - number=6, - type=5, - cpp_type=1, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3376, - serialized_end=3650, -) - - -_SPEECHCONTEXT = _descriptor.Descriptor( - name="SpeechContext", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="phrases", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechContext.phrases", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3652, - serialized_end=3689, -) - - -_SPEECHTRANSCRIPTION = _descriptor.Descriptor( - name="SpeechTranscription", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscription", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="alternatives", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechTranscription.alternatives", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3691, - serialized_end=3806, -) - - -_SPEECHRECOGNITIONALTERNATIVE = _descriptor.Descriptor( - name="SpeechRecognitionAlternative", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="transcript", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative.transcript", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="words", - full_name="google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative.words", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3809, - serialized_end=3951, -) - - -_WORDINFO = _descriptor.Descriptor( - name="WordInfo", - full_name="google.cloud.videointelligence.v1p1beta1.WordInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1p1beta1.WordInfo.start_time", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time", - 
full_name="google.cloud.videointelligence.v1p1beta1.WordInfo.end_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="word", - full_name="google.cloud.videointelligence.v1p1beta1.WordInfo.word", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3953, - serialized_end=4069, -) - -_ANNOTATEVIDEOREQUEST.fields_by_name["features"].enum_type = _FEATURE -_ANNOTATEVIDEOREQUEST.fields_by_name["video_context"].message_type = _VIDEOCONTEXT -_VIDEOCONTEXT.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_VIDEOCONTEXT.fields_by_name[ - "label_detection_config" -].message_type = _LABELDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "shot_change_detection_config" -].message_type = _SHOTCHANGEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "explicit_content_detection_config" -].message_type = _EXPLICITCONTENTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "speech_transcription_config" -].message_type = _SPEECHTRANSCRIPTIONCONFIG -_LABELDETECTIONCONFIG.fields_by_name[ - "label_detection_mode" -].enum_type = _LABELDETECTIONMODE -_VIDEOSEGMENT.fields_by_name[ - "start_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_VIDEOSEGMENT.fields_by_name[ - "end_time_offset" -].message_type = 
google_dot_protobuf_dot_duration__pb2._DURATION -_LABELSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_LABELFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["category_entities"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["segments"].message_type = _LABELSEGMENT -_LABELANNOTATION.fields_by_name["frames"].message_type = _LABELFRAME -_EXPLICITCONTENTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_EXPLICITCONTENTFRAME.fields_by_name["pornography_likelihood"].enum_type = _LIKELIHOOD -_EXPLICITCONTENTANNOTATION.fields_by_name["frames"].message_type = _EXPLICITCONTENTFRAME -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "frame_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name["shot_annotations"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "speech_transcriptions" -].message_type = _SPEECHTRANSCRIPTION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_ANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _VIDEOANNOTATIONRESULTS -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "update_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_ANNOTATEVIDEOPROGRESS.fields_by_name[ - "annotation_progress" -].message_type = 
_VIDEOANNOTATIONPROGRESS -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "speech_contexts" -].message_type = _SPEECHCONTEXT -_SPEECHTRANSCRIPTION.fields_by_name[ - "alternatives" -].message_type = _SPEECHRECOGNITIONALTERNATIVE -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["words"].message_type = _WORDINFO -_WORDINFO.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_WORDINFO.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -DESCRIPTOR.message_types_by_name["AnnotateVideoRequest"] = _ANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["VideoContext"] = _VIDEOCONTEXT -DESCRIPTOR.message_types_by_name["LabelDetectionConfig"] = _LABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ShotChangeDetectionConfig" -] = _SHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ExplicitContentDetectionConfig" -] = _EXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["VideoSegment"] = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["LabelSegment"] = _LABELSEGMENT -DESCRIPTOR.message_types_by_name["LabelFrame"] = _LABELFRAME -DESCRIPTOR.message_types_by_name["Entity"] = _ENTITY -DESCRIPTOR.message_types_by_name["LabelAnnotation"] = _LABELANNOTATION -DESCRIPTOR.message_types_by_name["ExplicitContentFrame"] = _EXPLICITCONTENTFRAME -DESCRIPTOR.message_types_by_name[ - "ExplicitContentAnnotation" -] = _EXPLICITCONTENTANNOTATION -DESCRIPTOR.message_types_by_name["VideoAnnotationResults"] = _VIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name["AnnotateVideoResponse"] = _ANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name["VideoAnnotationProgress"] = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoProgress"] = _ANNOTATEVIDEOPROGRESS -DESCRIPTOR.message_types_by_name[ - "SpeechTranscriptionConfig" -] = _SPEECHTRANSCRIPTIONCONFIG -DESCRIPTOR.message_types_by_name["SpeechContext"] = _SPEECHCONTEXT 
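Many of the field descriptors above carry opaque `serialized_options` byte strings such as `b"\340A\001"`, `b"\340A\002"`, and `b"\340A\003"`. These are wire-encoded `google.api.field_behavior` annotations (extension field 1052 on `FieldOptions`), marking a field OPTIONAL, REQUIRED, or OUTPUT_ONLY respectively. A minimal stdlib-only sketch (the decoder below is my own illustration, not part of the protobuf library) that unpacks one such option:

```python
def decode_field_option(data: bytes):
    """Decode a single varint-keyed, varint-valued protobuf option.

    Returns (field_number, wire_type, value).
    """
    def read_varint(buf, pos):
        # Standard protobuf base-128 varint: 7 payload bits per byte,
        # high bit set on all but the last byte.
        result = shift = 0
        while True:
            b = buf[pos]
            result |= (b & 0x7F) << shift
            pos += 1
            if not b & 0x80:
                return result, pos
            shift += 7

    key, pos = read_varint(data, 0)
    field_number, wire_type = key >> 3, key & 0x7
    value, _ = read_varint(data, pos)
    return field_number, wire_type, value


# b"\340A\002" on SpeechTranscriptionConfig.language_code decodes to
# extension field 1052 (google.api.field_behavior), varint value 2 = REQUIRED.
print(decode_field_option(b"\340A\002"))  # (1052, 0, 2)
```

The same pattern explains the other annotations in this file: `b"\340A\001"` is OPTIONAL and `b"\340A\003"` is OUTPUT_ONLY.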
-DESCRIPTOR.message_types_by_name["SpeechTranscription"] = _SPEECHTRANSCRIPTION -DESCRIPTOR.message_types_by_name[ - "SpeechRecognitionAlternative" -] = _SPEECHRECOGNITIONALTERNATIVE -DESCRIPTOR.message_types_by_name["WordInfo"] = _WORDINFO -DESCRIPTOR.enum_types_by_name["Feature"] = _FEATURE -DESCRIPTOR.enum_types_by_name["LabelDetectionMode"] = _LABELDETECTIONMODE -DESCRIPTOR.enum_types_by_name["Likelihood"] = _LIKELIHOOD -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -AnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation request. - - - Attributes: - input_uri: - Input video location. Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - A video URI may include wildcards in ``object-id``, and thus - identify multiple videos. Supported wildcards: ’*’ to match 0 - or more characters; ‘?’ to match 1 character. If unset, the - input video should be embedded in the request as - ``input_content``. If set, ``input_content`` should be unset. - input_content: - The video data bytes. If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be - unset. - features: - Required. Requested video annotation features. - video_context: - Additional video context and/or feature-specific parameters. - output_uri: - Optional. Location where the output (in JSON format) should be - stored. 
Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - location_id: - Optional. Cloud region where annotation should take place. - Supported cloud regions: ``us-east1``, ``us-west1``, ``europe- - west1``, ``asia-east1``. If no region is specified, a region - will be determined based on video file location. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.AnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(AnnotateVideoRequest) - -VideoContext = _reflection.GeneratedProtocolMessageType( - "VideoContext", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOCONTEXT, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video context and/or feature-specific parameters. - - - Attributes: - segments: - Video segments to annotate. The segments may overlap and are - not required to be contiguous or span the whole video. If - unspecified, each video is treated as a single segment. - label_detection_config: - Config for LABEL_DETECTION. - shot_change_detection_config: - Config for SHOT_CHANGE_DETECTION. - explicit_content_detection_config: - Config for EXPLICIT_CONTENT_DETECTION. - speech_transcription_config: - Config for SPEECH_TRANSCRIPTION. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.VideoContext) - }, -) -_sym_db.RegisterMessage(VideoContext) - -LabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "LabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _LABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Config for LABEL_DETECTION. 
- - - Attributes: - label_detection_mode: - What labels should be detected with LABEL_DETECTION, in - addition to video-level labels or segment-level labels. If - unspecified, defaults to ``SHOT_MODE``. - stationary_camera: - Whether the video has been shot from a stationary (i.e. non- - moving) camera. When set to true, might improve detection - accuracy for moving objects. Should be used with - ``SHOT_AND_FRAME_MODE`` enabled. - model: - Model to use for label detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.LabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(LabelDetectionConfig) - -ShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Config for SHOT_CHANGE_DETECTION. - - - Attributes: - model: - Model to use for shot change detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.ShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(ShotChangeDetectionConfig) - -ExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Config for EXPLICIT_CONTENT_DETECTION. - - - Attributes: - model: - Model to use for explicit content detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.ExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(ExplicitContentDetectionConfig) - -VideoSegment = _reflection.GeneratedProtocolMessageType( - "VideoSegment", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOSEGMENT, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment. - - - Attributes: - start_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the start of the segment (inclusive). - end_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the end of the segment (inclusive). - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.VideoSegment) - }, -) -_sym_db.RegisterMessage(VideoSegment) - -LabelSegment = _reflection.GeneratedProtocolMessageType( - "LabelSegment", - (_message.Message,), - { - "DESCRIPTOR": _LABELSEGMENT, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for label detection. - - - Attributes: - segment: - Video segment where a label was detected. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.LabelSegment) - }, -) -_sym_db.RegisterMessage(LabelSegment) - -LabelFrame = _reflection.GeneratedProtocolMessageType( - "LabelFrame", - (_message.Message,), - { - "DESCRIPTOR": _LABELFRAME, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for label detection. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - confidence: - Confidence that the label is accurate. Range: [0, 1]. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.LabelFrame) - }, -) -_sym_db.RegisterMessage(LabelFrame) - -Entity = _reflection.GeneratedProtocolMessageType( - "Entity", - (_message.Message,), - { - "DESCRIPTOR": _ENTITY, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Detected entity from video analysis. - - - Attributes: - entity_id: - Opaque entity ID. Some IDs may be available in `Google - Knowledge Graph Search API - `__. - description: - Textual description, e.g. ``Fixed-gear bicycle``. - language_code: - Language code for ``description`` in BCP-47 format. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.Entity) - }, -) -_sym_db.RegisterMessage(Entity) - -LabelAnnotation = _reflection.GeneratedProtocolMessageType( - "LabelAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LABELANNOTATION, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Label annotation. - - - Attributes: - entity: - Detected entity. - category_entities: - Common categories for the detected entity. E.g. when the label - is ``Terrier`` the category is likely ``dog``. And in some - cases there might be more than one categories e.g. ``Terrier`` - could also be a ``pet``. - segments: - All video segments where a label was detected. - frames: - All video frames where a label was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.LabelAnnotation) - }, -) -_sym_db.RegisterMessage(LabelAnnotation) - -ExplicitContentFrame = _reflection.GeneratedProtocolMessageType( - "ExplicitContentFrame", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTFRAME, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for explicit content. 
- - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - pornography_likelihood: - Likelihood of the pornography content.. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.ExplicitContentFrame) - }, -) -_sym_db.RegisterMessage(ExplicitContentFrame) - -ExplicitContentAnnotation = _reflection.GeneratedProtocolMessageType( - "ExplicitContentAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTANNOTATION, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Explicit content annotation (based on per-frame visual signals only). - If no explicit content has been detected in a frame, no annotations - are present for that frame. - - - Attributes: - frames: - All video frames where explicit content was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.ExplicitContentAnnotation) - }, -) -_sym_db.RegisterMessage(ExplicitContentAnnotation) - -VideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation results for a single video. - - - Attributes: - input_uri: - Output only. Video file location in `Google Cloud Storage - `__. - segment_label_annotations: - Label annotations on video level or user specified segment - level. There is exactly one element for each unique label. - shot_label_annotations: - Label annotations on shot level. There is exactly one element - for each unique label. - frame_label_annotations: - Label annotations on frame level. There is exactly one element - for each unique label. - shot_annotations: - Shot annotations. Each shot is represented as a video segment. 
- explicit_annotation: - Explicit content annotation. - speech_transcriptions: - Speech transcription. - error: - Output only. If set, indicates an error. Note that for a - single ``AnnotateVideoRequest`` some videos may succeed and - some may fail. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.VideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(VideoAnnotationResults) - -AnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation response. Included in the ``response`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_results: - Annotation results for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.AnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(AnnotateVideoResponse) - -VideoAnnotationProgress = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationProgress", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONPROGRESS, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation progress for a single video. - - - Attributes: - input_uri: - Output only. Video file location in `Google Cloud Storage - `__. - progress_percent: - Output only. Approximate percentage processed thus far. - Guaranteed to be 100 when fully processed. - start_time: - Output only. Time when the request was received. - update_time: - Output only. Time of the most recent update. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.VideoAnnotationProgress) - }, -) -_sym_db.RegisterMessage(VideoAnnotationProgress) - -AnnotateVideoProgress = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoProgress", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOPROGRESS, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation progress. Included in the ``metadata`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_progress: - Progress metadata for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.AnnotateVideoProgress) - }, -) -_sym_db.RegisterMessage(AnnotateVideoProgress) - -SpeechTranscriptionConfig = _reflection.GeneratedProtocolMessageType( - "SpeechTranscriptionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Config for SPEECH_TRANSCRIPTION. - - - Attributes: - language_code: - Required. *Required* The language of the supplied audio as a - `BCP-47 `__ - language tag. Example: “en-US”. See `Language Support - `__ for a list - of the currently supported language codes. - max_alternatives: - Optional. Maximum number of recognition hypotheses to be - returned. Specifically, the maximum number of - ``SpeechRecognitionAlternative`` messages within each - ``SpeechTranscription``. The server may return fewer than - ``max_alternatives``. Valid values are ``0``-``30``. A value - of ``0`` or ``1`` will return a maximum of one. If omitted, - will return a maximum of one. - filter_profanity: - Optional. 
If set to ``true``, the server will attempt to - filter out profanities, replacing all but the initial - character in each filtered word with asterisks, e.g. "f***". - If set to ``false`` or omitted, profanities won’t be filtered - out. - speech_contexts: - Optional. A means to provide context to assist the speech - recognition. - enable_automatic_punctuation: - Optional. If ‘true’, adds punctuation to recognition result - hypotheses. This feature is only available in select - languages. Setting this for requests in other languages has no - effect at all. The default ‘false’ value does not add - punctuation to result hypotheses. NOTE: “This is currently - offered as an experimental service, complimentary to all - users. In the future this may be exclusively available as a - premium feature.” - audio_tracks: - Optional. For file formats, such as MXF or MKV, supporting - multiple audio tracks, specify up to two tracks. Default: - track 0. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.SpeechTranscriptionConfig) - }, -) -_sym_db.RegisterMessage(SpeechTranscriptionConfig) - -SpeechContext = _reflection.GeneratedProtocolMessageType( - "SpeechContext", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHCONTEXT, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Provides “hints” to the speech recognizer to favor specific words and - phrases in the results. - - - Attributes: - phrases: - Optional. A list of strings containing words and phrases - “hints” so that the speech recognition is more likely to - recognize them. This can be used to improve the accuracy for - specific words and phrases, for example, if specific commands - are typically spoken by the user. This can also be used to add - additional words to the vocabulary of the recognizer. See - `usage limits - `__. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.SpeechContext) - }, -) -_sym_db.RegisterMessage(SpeechContext) - -SpeechTranscription = _reflection.GeneratedProtocolMessageType( - "SpeechTranscription", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTION, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """A speech recognition result corresponding to a portion of the audio. - - - Attributes: - alternatives: - May contain one or more recognition hypotheses (up to the - maximum specified in ``max_alternatives``). These alternatives - are ordered in terms of accuracy, with the top (first) - alternative being the most probable, as ranked by the - recognizer. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.SpeechTranscription) - }, -) -_sym_db.RegisterMessage(SpeechTranscription) - -SpeechRecognitionAlternative = _reflection.GeneratedProtocolMessageType( - "SpeechRecognitionAlternative", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHRECOGNITIONALTERNATIVE, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Alternative hypotheses (a.k.a. n-best list). - - - Attributes: - transcript: - Output only. Transcript text representing the words that the - user spoke. - confidence: - Output only. The confidence estimate between 0.0 and 1.0. A - higher number indicates an estimated greater likelihood that - the recognized words are correct. This field is set only for - the top alternative. This field is not guaranteed to be - accurate and users should not rely on it to be always - provided. The default of 0.0 is a sentinel value indicating - ``confidence`` was not set. - words: - Output only. A list of word-specific information for each - recognized word. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.SpeechRecognitionAlternative) - }, -) -_sym_db.RegisterMessage(SpeechRecognitionAlternative) - -WordInfo = _reflection.GeneratedProtocolMessageType( - "WordInfo", - (_message.Message,), - { - "DESCRIPTOR": _WORDINFO, - "__module__": "google.cloud.videointelligence_v1p1beta1.proto.video_intelligence_pb2", - "__doc__": """Word-specific information for recognized words. Word information is - only included in the response when certain request parameters are set, - such as ``enable_word_time_offsets``. - - - Attributes: - start_time: - Output only. Time offset relative to the beginning of the - audio, and corresponding to the start of the spoken word. This - field is only set if ``enable_word_time_offsets=true`` and - only in the top hypothesis. This is an experimental feature - and the accuracy of the time offset can vary. - end_time: - Output only. Time offset relative to the beginning of the - audio, and corresponding to the end of the spoken word. This - field is only set if ``enable_word_time_offsets=true`` and - only in the top hypothesis. This is an experimental feature - and the accuracy of the time offset can vary. - word: - Output only. The word corresponding to this set of - information. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p1beta1.WordInfo) - }, -) -_sym_db.RegisterMessage(WordInfo) - - -DESCRIPTOR._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["features"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["output_uri"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["location_id"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["language_code"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["max_alternatives"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["filter_profanity"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["speech_contexts"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "enable_automatic_punctuation" -]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["audio_tracks"]._options = None -_SPEECHCONTEXT.fields_by_name["phrases"]._options = None -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["confidence"]._options = None - -_VIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="VideoIntelligenceService", - full_name="google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService", - file=DESCRIPTOR, - index=0, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=4449, - serialized_end=4783, - methods=[ - _descriptor.MethodDescriptor( - name="AnnotateVideo", - full_name="google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService.AnnotateVideo", - index=0, - containing_service=None, - input_type=_ANNOTATEVIDEOREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=b'\202\323\344\223\002\037"\032/v1p1beta1/videos:annotate:\001*\332A\022input_uri,features\312A.\n\025AnnotateVideoResponse\022\025AnnotateVideoProgress', - create_key=_descriptor._internal_create_key, - ), - ], -) 
-_sym_db.RegisterServiceDescriptor(_VIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name["VideoIntelligenceService"] = _VIDEOINTELLIGENCESERVICE - -# @@protoc_insertion_point(module_scope) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2_grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2_grpc.py deleted file mode 100644 index a17640d625ce..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/proto/video_intelligence_pb2_grpc.py +++ /dev/null @@ -1,92 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from google.cloud.videointelligence_v1p1beta1.proto import ( - video_intelligence_pb2 as google_dot_cloud_dot_videointelligence__v1p1beta1_dot_proto_dot_video__intelligence__pb2, -) -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) - - -class VideoIntelligenceServiceStub(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. - """ - self.AnnotateVideo = channel.unary_unary( - "/google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService/AnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1p1beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, - ) - - -class VideoIntelligenceServiceServicer(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def AnnotateVideo(self, request, context): - """Performs asynchronous video annotation. 
Progress and results can be - retrieved through the `google.longrunning.Operations` interface. - `Operation.metadata` contains `AnnotateVideoProgress` (progress). - `Operation.response` contains `AnnotateVideoResponse` (results). - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_VideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "AnnotateVideo": grpc.unary_unary_rpc_method_handler( - servicer.AnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1p1beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class VideoIntelligenceService(object): - """Service that implements Google Cloud Video Intelligence API. 
- """ - - @staticmethod - def AnnotateVideo( - request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.unary_unary( - request, - target, - "/google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService/AnnotateVideo", - google_dot_cloud_dot_videointelligence__v1p1beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - google_dot_longrunning_dot_operations__pb2.Operation.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/py.typed new file mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/__init__.py new file mode 100644 index 000000000000..42ffdf2bc43d --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/__init__.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
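The deleted `video_intelligence_pb2_grpc.py` above follows the standard generated-servicer pattern: every RPC on the servicer base class reports `UNIMPLEMENTED` on the context and raises until a concrete subclass overrides it. A minimal stdlib-only sketch of that behavior (the `FakeContext`, `BaseVideoServicer`, and `MyVideoServicer` names are illustrative stand-ins, not part of the library):

```python
# Sketch: generated servicer methods fail with UNIMPLEMENTED until overridden.
# FakeContext stands in for a grpc.ServicerContext (hypothetical helper).

class FakeContext:
    def __init__(self):
        self.code = None
        self.details = None

    def set_code(self, code):
        self.code = code

    def set_details(self, details):
        self.details = details


class BaseVideoServicer:
    """Mirrors the generated base class: default method is a stub."""

    def AnnotateVideo(self, request, context):
        context.set_code("UNIMPLEMENTED")
        context.set_details("Method not implemented!")
        raise NotImplementedError("Method not implemented!")


class MyVideoServicer(BaseVideoServicer):
    """A concrete implementation overrides the stub."""

    def AnnotateVideo(self, request, context):
        return "annotated"  # a real servicer returns a longrunning Operation
```

In the real module, `add_VideoIntelligenceServiceServicer_to_server` then registers the overridden methods as unary-unary handlers on a `grpc.Server`.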
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/__init__.py new file mode 100644 index 000000000000..559320286ca2 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from .client import VideoIntelligenceServiceClient +from .async_client import VideoIntelligenceServiceAsyncClient + +__all__ = ( + "VideoIntelligenceServiceClient", + "VideoIntelligenceServiceAsyncClient", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/async_client.py new file mode 100644 index 000000000000..b7a986b7f005 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/async_client.py @@ -0,0 +1,270 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import Dict, Sequence, Tuple, Type, Union +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p1beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport +from .client import VideoIntelligenceServiceClient + + +class VideoIntelligenceServiceAsyncClient: + """Service that implements Google Cloud Video Intelligence API.""" + + _client: VideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + + common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod(VideoIntelligenceServiceClient.common_folder_path) + parse_common_folder_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + VideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + VideoIntelligenceServiceClient.common_project_path + ) + 
parse_common_project_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + VideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = VideoIntelligenceServiceClient.from_service_account_file + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(VideoIntelligenceServiceClient).get_transport_class, + type(VideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[str, VideoIntelligenceServiceTransport] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = VideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + async def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation_async.AsyncOperation: + r"""Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. 
Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
+ has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/client.py new file mode 100644 index 000000000000..ba001598fa48 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/client.py @@ -0,0 +1,438 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
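The `annotate_video` coroutine above rejects mixing a full `request` object with flattened field arguments (`input_uri`, `features`). The guard can be sketched as a standalone function (`check_flattened` is an illustrative name, not a library API):

```python
# Sketch of the request-vs-flattened-arguments guard in annotate_video:
# a complete `request` and individual field arguments are mutually exclusive.

def check_flattened(request, **field_args):
    """Raise ValueError if `request` is combined with any flattened field."""
    has_flattened_params = any(v is not None for v in field_args.values())
    if request is not None and has_flattened_params:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
```

Either call style alone is fine; only the combination is an error, because the client would not know which value wins.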
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import Callable, Dict, Optional, Sequence, Tuple, Type, Union +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p1beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import VideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +class VideoIntelligenceServiceClientMeta(type): + """Metaclass for the VideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport + _transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[VideoIntelligenceServiceTransport]: + """Return an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. 
+ + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class VideoIntelligenceServiceClient(metaclass=VideoIntelligenceServiceClientMeta): + """Service that implements Google Cloud Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + {@api.name}: The constructed client.
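The mTLS endpoint conversion above can be exercised standalone; the named regex groups match the `name, mtls, sandbox, googledomain` unpacking in the client source (`to_mtls_endpoint` and `_MTLS_RE` are illustrative names for this sketch):

```python
import re

# Standalone copy of _get_default_mtls_endpoint: rewrite "*.googleapis.com"
# hosts to their ".mtls." variants, leaving non-Google and already-mTLS
# endpoints untouched.
_MTLS_RE = re.compile(
    r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?"
    r"(?P<googledomain>\.googleapis\.com)?"
)

def to_mtls_endpoint(api_endpoint):
    if not api_endpoint:
        return api_endpoint
    m = _MTLS_RE.match(api_endpoint)
    name, mtls, sandbox, googledomain = m.groups()
    if mtls or not googledomain:
        # Already an mTLS endpoint, or not a googleapis.com host.
        return api_endpoint
    if sandbox:
        return api_endpoint.replace(
            "sandbox.googleapis.com", "mtls.sandbox.googleapis.com"
        )
    return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com")
```

For example, `to_mtls_endpoint("videointelligence.googleapis.com")` yields the `DEFAULT_MTLS_ENDPOINT` value computed from `DEFAULT_ENDPOINT` in the class body.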
+ """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P<folder>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse an organization path into its component segments.""" + m = re.match(r"^organizations/(?P<organization>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + + @staticmethod + def parse_common_project_path(path: str) ->
Dict[str, str]: + """Parse a project path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_location_path(project: str, location: str,) -> str: + """Return a fully-qualified location string.""" + return "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + + @staticmethod + def parse_common_location_path(path: str) -> Dict[str, str]: + """Parse a location path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path) + return m.groupdict() if m else {} + + def __init__( + self, + *, + credentials: Optional[credentials.Credentials] = None, + transport: Union[str, VideoIntelligenceServiceTransport, None] = None, + client_options: Optional[client_options_lib.ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (client_options_lib.ClientOptions): Custom options for the + client. It won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value).
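The `common_*_path` / `parse_common_*_path` helpers above are build/parse inverses over resource-name templates; the named regex groups match the `{placeholder}` names in the format strings. A standalone sketch of the location pair:

```python
import re

# Standalone copies of the location path-template helpers: build and
# parse are inverses for well-formed resource names.

def common_location_path(project, location):
    """Return a fully-qualified location string."""
    return "projects/{project}/locations/{location}".format(
        project=project, location=location,
    )

def parse_common_location_path(path):
    """Parse a location path into its component segments."""
    m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
    return m.groupdict() if m else {}
```

A path built by one helper round-trips through the other; a non-matching path parses to an empty dict rather than raising.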
However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, VideoIntelligenceServiceTransport): + # transport is a VideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: + r"""Performs asynchronous video annotation. 
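The endpoint-selection branch above can be isolated into a pure function: an explicit `api_endpoint` always wins, otherwise `GOOGLE_API_USE_MTLS_ENDPOINT` (`"never"`, `"always"`, or the default `"auto"`) chooses between the regular and mTLS defaults (`select_api_endpoint` is an illustrative name for this sketch, not a library API):

```python
import os

# Sketch of the endpoint-selection logic in VideoIntelligenceServiceClient.
DEFAULT_ENDPOINT = "videointelligence.googleapis.com"
DEFAULT_MTLS_ENDPOINT = "videointelligence.mtls.googleapis.com"

def select_api_endpoint(api_endpoint=None, is_mtls=False, env=None):
    """Pick the endpoint the client should dial."""
    if env is None:
        env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto")
    if api_endpoint is not None:
        return api_endpoint  # explicit client_options.api_endpoint wins
    if env == "never":
        return DEFAULT_ENDPOINT
    if env == "always":
        return DEFAULT_MTLS_ENDPOINT
    if env == "auto":
        # Switch to mTLS only when a client certificate is present.
        return DEFAULT_MTLS_ENDPOINT if is_mtls else DEFAULT_ENDPOINT
    raise ValueError(
        "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. "
        "Accepted values: never, auto, always"
    )
```

Any other environment value is rejected, mirroring the `MutualTLSChannelError` raised in the client (swapped here for a plain `ValueError` to stay dependency-free).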
Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. 
Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # Minor optimization to avoid making a copy if the user passes + # in a video_intelligence.AnnotateVideoRequest. + # There's no risk of modifying the input as we've already verified + # there are no flattened fields. + if not isinstance(request, video_intelligence.AnnotateVideoRequest): + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.annotate_video] + + # Send the request. + response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..08de302230ba --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import VideoIntelligenceServiceTransport +from .grpc import VideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport +_transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "VideoIntelligenceServiceTransport", + "VideoIntelligenceServiceGrpcTransport", + "VideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..0117c0f9ff14 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/base.py @@ -0,0 +1,141 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import abc +import typing +import pkg_resources + +from google import auth # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.auth import credentials # type: ignore + +from google.cloud.videointelligence_v1p1beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +class VideoIntelligenceServiceTransport(abc.ABC): + """Abstract transport class for VideoIntelligenceService.""" + + AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: typing.Optional[str] = None, + scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES, + quota_project_id: typing.Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes.
+ quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.annotate_video: gapic_v1.method.wrap_method( + self.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=client_info, + ), + } + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.AnnotateVideoRequest], + typing.Union[operations.Operation, typing.Awaitable[operations.Operation]], + ]: + raise NotImplementedError() + + +__all__ = ("VideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..20fb4282a139 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1p1beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class VideoIntelligenceServiceGrpcTransport(VideoIntelligenceServiceTransport): + """gRPC backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): A list of scopes. This argument is + ignored if ``channel`` is provided. + channel (Optional[grpc.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided.
+ credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+ super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + address (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[[video_intelligence.AnnotateVideoRequest], operations.Operation]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..b00dcf284058 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/services/video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google.api_core import operations_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1p1beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import VideoIntelligenceServiceGrpcTransport + + +class VideoIntelligenceServiceGrpcAsyncIOTransport(VideoIntelligenceServiceTransport): + """gRPC AsyncIO backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + address (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. 
If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: aio.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id=None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided.
+ credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[aio.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. 
+ + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsAsyncClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[ + [video_intelligence.AnnotateVideoRequest], Awaitable[operations.Operation] + ]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p1beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types.py deleted file mode 100644 index a9acbd45a863..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from __future__ import absolute_import -import sys - -from google.api_core.protobuf_helpers import get_messages - -from google.cloud.videointelligence_v1p1beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - - -_shared_modules = [ - operations_pb2, - any_pb2, - duration_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [ - video_intelligence_pb2, -] - -names = [] - -for module in _shared_modules: # pragma: NO COVER - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.videointelligence_v1p1beta1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - - -__all__ = tuple(sorted(names)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/__init__.py new file mode 100644 index 000000000000..54948c48b341 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/__init__.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +from .video_intelligence import ( + AnnotateVideoRequest, + VideoContext, + LabelDetectionConfig, + ShotChangeDetectionConfig, + ExplicitContentDetectionConfig, + VideoSegment, + LabelSegment, + LabelFrame, + Entity, + LabelAnnotation, + ExplicitContentFrame, + ExplicitContentAnnotation, + VideoAnnotationResults, + AnnotateVideoResponse, + VideoAnnotationProgress, + AnnotateVideoProgress, + SpeechTranscriptionConfig, + SpeechContext, + SpeechTranscription, + SpeechRecognitionAlternative, + WordInfo, +) + + +__all__ = ( + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/video_intelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/video_intelligence.py new file mode 100644 index 000000000000..c766ccc87594 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p1beta1/types/video_intelligence.py @@ -0,0 +1,636 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import proto # type: ignore + + +from google.protobuf import duration_pb2 as duration # type: ignore +from google.protobuf import timestamp_pb2 as timestamp # type: ignore +from google.rpc import status_pb2 as status # type: ignore + + +__protobuf__ = proto.module( + package="google.cloud.videointelligence.v1p1beta1", + manifest={ + "Feature", + "LabelDetectionMode", + "Likelihood", + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", + }, +) + + +class Feature(proto.Enum): + r"""Video annotation feature.""" + FEATURE_UNSPECIFIED = 0 + LABEL_DETECTION = 1 + SHOT_CHANGE_DETECTION = 2 + EXPLICIT_CONTENT_DETECTION = 3 + SPEECH_TRANSCRIPTION = 6 + + +class LabelDetectionMode(proto.Enum): + r"""Label detection mode.""" + LABEL_DETECTION_MODE_UNSPECIFIED = 0 + SHOT_MODE = 1 + FRAME_MODE = 2 + SHOT_AND_FRAME_MODE = 3 + + +class Likelihood(proto.Enum): + r"""Bucketized representation of likelihood.""" + LIKELIHOOD_UNSPECIFIED = 0 + VERY_UNLIKELY = 1 + UNLIKELY = 2 + POSSIBLE = 3 + LIKELY = 4 + VERY_LIKELY = 5 + + +class AnnotateVideoRequest(proto.Message): + r"""Video annotation request. 
+
+    Attributes:
+        input_uri (str):
+            Input video location. Currently, only `Google Cloud
+            Storage `__ URIs are
+            supported, which must be specified in the following format:
+            ``gs://bucket-id/object-id`` (other URI formats return
+            [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]).
+            For more information, see `Request
+            URIs `__.
+            A video URI may include wildcards in ``object-id``, and thus
+            identify multiple videos. Supported wildcards: '*' to match
+            0 or more characters; '?' to match 1 character. If unset,
+            the input video should be embedded in the request as
+            ``input_content``. If set, ``input_content`` should be
+            unset.
+        input_content (bytes):
+            The video data bytes. If unset, the input video(s) should be
+            specified via ``input_uri``. If set, ``input_uri`` should be
+            unset.
+        features (Sequence[~.video_intelligence.Feature]):
+            Required. Requested video annotation
+            features.
+        video_context (~.video_intelligence.VideoContext):
+            Additional video context and/or
+            feature-specific parameters.
+        output_uri (str):
+            Optional. Location where the output (in JSON format) should
+            be stored. Currently, only `Google Cloud
+            Storage `__ URIs are
+            supported, which must be specified in the following format:
+            ``gs://bucket-id/object-id`` (other URI formats return
+            [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]).
+            For more information, see `Request
+            URIs `__.
+        location_id (str):
+            Optional. Cloud region where annotation should take place.
+            Supported cloud regions: ``us-east1``, ``us-west1``,
+            ``europe-west1``, ``asia-east1``. If no region is specified,
+            a region will be determined based on video file location.
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + input_content = proto.Field(proto.BYTES, number=6) + + features = proto.RepeatedField(proto.ENUM, number=2, enum="Feature",) + + video_context = proto.Field(proto.MESSAGE, number=3, message="VideoContext",) + + output_uri = proto.Field(proto.STRING, number=4) + + location_id = proto.Field(proto.STRING, number=5) + + +class VideoContext(proto.Message): + r"""Video context and/or feature-specific parameters. + + Attributes: + segments (Sequence[~.video_intelligence.VideoSegment]): + Video segments to annotate. The segments may + overlap and are not required to be contiguous or + span the whole video. If unspecified, each video + is treated as a single segment. + label_detection_config (~.video_intelligence.LabelDetectionConfig): + Config for LABEL_DETECTION. + shot_change_detection_config (~.video_intelligence.ShotChangeDetectionConfig): + Config for SHOT_CHANGE_DETECTION. + explicit_content_detection_config (~.video_intelligence.ExplicitContentDetectionConfig): + Config for EXPLICIT_CONTENT_DETECTION. + speech_transcription_config (~.video_intelligence.SpeechTranscriptionConfig): + Config for SPEECH_TRANSCRIPTION. + """ + + segments = proto.RepeatedField(proto.MESSAGE, number=1, message="VideoSegment",) + + label_detection_config = proto.Field( + proto.MESSAGE, number=2, message="LabelDetectionConfig", + ) + + shot_change_detection_config = proto.Field( + proto.MESSAGE, number=3, message="ShotChangeDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, number=4, message="ExplicitContentDetectionConfig", + ) + + speech_transcription_config = proto.Field( + proto.MESSAGE, number=6, message="SpeechTranscriptionConfig", + ) + + +class LabelDetectionConfig(proto.Message): + r"""Config for LABEL_DETECTION. 
+ + Attributes: + label_detection_mode (~.video_intelligence.LabelDetectionMode): + What labels should be detected with LABEL_DETECTION, in + addition to video-level labels or segment-level labels. If + unspecified, defaults to ``SHOT_MODE``. + stationary_camera (bool): + Whether the video has been shot from a stationary (i.e. + non-moving) camera. When set to true, might improve + detection accuracy for moving objects. Should be used with + ``SHOT_AND_FRAME_MODE`` enabled. + model (str): + Model to use for label detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + label_detection_mode = proto.Field(proto.ENUM, number=1, enum="LabelDetectionMode",) + + stationary_camera = proto.Field(proto.BOOL, number=2) + + model = proto.Field(proto.STRING, number=3) + + +class ShotChangeDetectionConfig(proto.Message): + r"""Config for SHOT_CHANGE_DETECTION. + + Attributes: + model (str): + Model to use for shot change detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class ExplicitContentDetectionConfig(proto.Message): + r"""Config for EXPLICIT_CONTENT_DETECTION. + + Attributes: + model (str): + Model to use for explicit content detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class VideoSegment(proto.Message): + r"""Video segment. + + Attributes: + start_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the start of the segment + (inclusive). + end_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the end of the segment + (inclusive). 
+ """ + + start_time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class LabelSegment(proto.Message): + r"""Video segment level annotation results for label detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a label was detected. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class LabelFrame(proto.Message): + r"""Video frame level annotation results for label detection. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class Entity(proto.Message): + r"""Detected entity from video analysis. + + Attributes: + entity_id (str): + Opaque entity ID. Some IDs may be available in `Google + Knowledge Graph Search + API `__. + description (str): + Textual description, e.g. ``Fixed-gear bicycle``. + language_code (str): + Language code for ``description`` in BCP-47 format. + """ + + entity_id = proto.Field(proto.STRING, number=1) + + description = proto.Field(proto.STRING, number=2) + + language_code = proto.Field(proto.STRING, number=3) + + +class LabelAnnotation(proto.Message): + r"""Label annotation. + + Attributes: + entity (~.video_intelligence.Entity): + Detected entity. + category_entities (Sequence[~.video_intelligence.Entity]): + Common categories for the detected entity. E.g. when the + label is ``Terrier`` the category is likely ``dog``. And in + some cases there might be more than one categories e.g. 
+            ``Terrier`` could also be a ``pet``.
+        segments (Sequence[~.video_intelligence.LabelSegment]):
+            All video segments where a label was
+            detected.
+        frames (Sequence[~.video_intelligence.LabelFrame]):
+            All video frames where a label was detected.
+    """
+
+    entity = proto.Field(proto.MESSAGE, number=1, message="Entity",)
+
+    category_entities = proto.RepeatedField(proto.MESSAGE, number=2, message="Entity",)
+
+    segments = proto.RepeatedField(proto.MESSAGE, number=3, message="LabelSegment",)
+
+    frames = proto.RepeatedField(proto.MESSAGE, number=4, message="LabelFrame",)
+
+
+class ExplicitContentFrame(proto.Message):
+    r"""Video frame level annotation results for explicit content.
+
+    Attributes:
+        time_offset (~.duration.Duration):
+            Time-offset, relative to the beginning of the
+            video, corresponding to the video frame for this
+            location.
+        pornography_likelihood (~.video_intelligence.Likelihood):
+            Likelihood of pornographic content.
+    """
+
+    time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,)
+
+    pornography_likelihood = proto.Field(proto.ENUM, number=2, enum="Likelihood",)
+
+
+class ExplicitContentAnnotation(proto.Message):
+    r"""Explicit content annotation (based on per-frame visual
+    signals only). If no explicit content has been detected in a
+    frame, no annotations are present for that frame.
+
+    Attributes:
+        frames (Sequence[~.video_intelligence.ExplicitContentFrame]):
+            All video frames where explicit content was
+            detected.
+    """
+
+    frames = proto.RepeatedField(
+        proto.MESSAGE, number=1, message="ExplicitContentFrame",
+    )
+
+
+class VideoAnnotationResults(proto.Message):
+    r"""Annotation results for a single video.
+
+    Attributes:
+        input_uri (str):
+            Output only. Video file location in `Google Cloud
+            Storage `__.
+        segment_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]):
+            Label annotations on video level or user
+            specified segment level. There is exactly one
+            element for each unique label.
+ shot_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on shot level. + There is exactly one element for each unique + label. + frame_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on frame level. + There is exactly one element for each unique + label. + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotations. Each shot is represented as + a video segment. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation. + speech_transcriptions (Sequence[~.video_intelligence.SpeechTranscription]): + Speech transcription. + error (~.status.Status): + Output only. If set, indicates an error. Note that for a + single ``AnnotateVideoRequest`` some videos may succeed and + some may fail. + """ + + input_uri = proto.Field(proto.STRING, number=1) + + segment_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + shot_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=3, message="LabelAnnotation", + ) + + frame_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="LabelAnnotation", + ) + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=6, message="VideoSegment", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=7, message="ExplicitContentAnnotation", + ) + + speech_transcriptions = proto.RepeatedField( + proto.MESSAGE, number=11, message="SpeechTranscription", + ) + + error = proto.Field(proto.MESSAGE, number=9, message=status.Status,) + + +class AnnotateVideoResponse(proto.Message): + r"""Video annotation response. Included in the ``response`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. 
+
+    Attributes:
+        annotation_results (Sequence[~.video_intelligence.VideoAnnotationResults]):
+            Annotation results for all videos specified in
+            ``AnnotateVideoRequest``.
+    """
+
+    annotation_results = proto.RepeatedField(
+        proto.MESSAGE, number=1, message="VideoAnnotationResults",
+    )
+
+
+class VideoAnnotationProgress(proto.Message):
+    r"""Annotation progress for a single video.
+
+    Attributes:
+        input_uri (str):
+            Output only. Video file location in `Google Cloud
+            Storage `__.
+        progress_percent (int):
+            Output only. Approximate percentage processed
+            thus far. Guaranteed to be 100 when fully
+            processed.
+        start_time (~.timestamp.Timestamp):
+            Output only. Time when the request was
+            received.
+        update_time (~.timestamp.Timestamp):
+            Output only. Time of the most recent update.
+    """
+
+    input_uri = proto.Field(proto.STRING, number=1)
+
+    progress_percent = proto.Field(proto.INT32, number=2)
+
+    start_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,)
+
+    update_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,)
+
+
+class AnnotateVideoProgress(proto.Message):
+    r"""Video annotation progress. Included in the ``metadata`` field of the
+    ``Operation`` returned by the ``GetOperation`` call of the
+    ``google::longrunning::Operations`` service.
+
+    Attributes:
+        annotation_progress (Sequence[~.video_intelligence.VideoAnnotationProgress]):
+            Progress metadata for all videos specified in
+            ``AnnotateVideoRequest``.
+    """
+
+    annotation_progress = proto.RepeatedField(
+        proto.MESSAGE, number=1, message="VideoAnnotationProgress",
+    )
+
+
+class SpeechTranscriptionConfig(proto.Message):
+    r"""Config for SPEECH_TRANSCRIPTION.
+
+    Attributes:
+        language_code (str):
+            Required. The language of the supplied audio as a
+            `BCP-47 `__
+            language tag. Example: "en-US". See `Language
+            Support `__
+            for a list of the currently supported language codes.
+        max_alternatives (int):
+            Optional.
Maximum number of recognition hypotheses to be + returned. Specifically, the maximum number of + ``SpeechRecognitionAlternative`` messages within each + ``SpeechTranscription``. The server may return fewer than + ``max_alternatives``. Valid values are ``0``-``30``. A value + of ``0`` or ``1`` will return a maximum of one. If omitted, + will return a maximum of one. + filter_profanity (bool): + Optional. If set to ``true``, the server will attempt to + filter out profanities, replacing all but the initial + character in each filtered word with asterisks, e.g. "f***". + If set to ``false`` or omitted, profanities won't be + filtered out. + speech_contexts (Sequence[~.video_intelligence.SpeechContext]): + Optional. A means to provide context to + assist the speech recognition. + enable_automatic_punctuation (bool): + Optional. If 'true', adds punctuation to + recognition result hypotheses. This feature is + only available in select languages. Setting this + for requests in other languages has no effect at + all. The default 'false' value does not add + punctuation to result hypotheses. NOTE: "This is + currently offered as an experimental service, + complimentary to all users. In the future this + may be exclusively available as a premium + feature.". + audio_tracks (Sequence[int]): + Optional. For file formats, such as MXF or + MKV, supporting multiple audio tracks, specify + up to two tracks. Default: track 0. + """ + + language_code = proto.Field(proto.STRING, number=1) + + max_alternatives = proto.Field(proto.INT32, number=2) + + filter_profanity = proto.Field(proto.BOOL, number=3) + + speech_contexts = proto.RepeatedField( + proto.MESSAGE, number=4, message="SpeechContext", + ) + + enable_automatic_punctuation = proto.Field(proto.BOOL, number=5) + + audio_tracks = proto.RepeatedField(proto.INT32, number=6) + + +class SpeechContext(proto.Message): + r"""Provides "hints" to the speech recognizer to favor specific + words and phrases in the results. 
+ + Attributes: + phrases (Sequence[str]): + Optional. A list of strings containing words and phrases + "hints" so that the speech recognition is more likely to + recognize them. This can be used to improve the accuracy for + specific words and phrases, for example, if specific + commands are typically spoken by the user. This can also be + used to add additional words to the vocabulary of the + recognizer. See `usage + limits `__. + """ + + phrases = proto.RepeatedField(proto.STRING, number=1) + + +class SpeechTranscription(proto.Message): + r"""A speech recognition result corresponding to a portion of the + audio. + + Attributes: + alternatives (Sequence[~.video_intelligence.SpeechRecognitionAlternative]): + May contain one or more recognition hypotheses (up to the + maximum specified in ``max_alternatives``). These + alternatives are ordered in terms of accuracy, with the top + (first) alternative being the most probable, as ranked by + the recognizer. + """ + + alternatives = proto.RepeatedField( + proto.MESSAGE, number=1, message="SpeechRecognitionAlternative", + ) + + +class SpeechRecognitionAlternative(proto.Message): + r"""Alternative hypotheses (a.k.a. n-best list). + + Attributes: + transcript (str): + Output only. Transcript text representing the + words that the user spoke. + confidence (float): + Output only. The confidence estimate between 0.0 and 1.0. A + higher number indicates an estimated greater likelihood that + the recognized words are correct. This field is set only for + the top alternative. This field is not guaranteed to be + accurate and users should not rely on it to be always + provided. The default of 0.0 is a sentinel value indicating + ``confidence`` was not set. + words (Sequence[~.video_intelligence.WordInfo]): + Output only. A list of word-specific + information for each recognized word. 
+ """ + + transcript = proto.Field(proto.STRING, number=1) + + confidence = proto.Field(proto.FLOAT, number=2) + + words = proto.RepeatedField(proto.MESSAGE, number=3, message="WordInfo",) + + +class WordInfo(proto.Message): + r"""Word-specific information for recognized words. Word information is + only included in the response when certain request parameters are + set, such as ``enable_word_time_offsets``. + + Attributes: + start_time (~.duration.Duration): + Output only. Time offset relative to the beginning of the + audio, and corresponding to the start of the spoken word. + This field is only set if ``enable_word_time_offsets=true`` + and only in the top hypothesis. This is an experimental + feature and the accuracy of the time offset can vary. + end_time (~.duration.Duration): + Output only. Time offset relative to the beginning of the + audio, and corresponding to the end of the spoken word. This + field is only set if ``enable_word_time_offsets=true`` and + only in the top hypothesis. This is an experimental feature + and the accuracy of the time offset can vary. + word (str): + Output only. The word corresponding to this + set of information. 
+ """ + + start_time = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + word = proto.Field(proto.STRING, number=3) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/__init__.py index a6cfe01b0267..8b821c7dff94 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/__init__.py @@ -1,49 +1,79 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. +# - -from __future__ import absolute_import -import sys -import warnings - -from google.cloud.videointelligence_v1p2beta1 import types -from google.cloud.videointelligence_v1p2beta1.gapic import enums -from google.cloud.videointelligence_v1p2beta1.gapic import ( - video_intelligence_service_client, -) - - -if sys.version_info[:2] == (2, 7): - message = ( - "A future version of this library will drop support for Python 2.7. 
" - "More details about Python 2 support for Google Cloud Client Libraries " - "can be found at https://cloud.google.com/python/docs/python2-sunset/" - ) - warnings.warn(message, DeprecationWarning) - - -class VideoIntelligenceServiceClient( - video_intelligence_service_client.VideoIntelligenceServiceClient -): - __doc__ = video_intelligence_service_client.VideoIntelligenceServiceClient.__doc__ - enums = enums +from .services.video_intelligence_service import VideoIntelligenceServiceClient +from .types.video_intelligence import AnnotateVideoProgress +from .types.video_intelligence import AnnotateVideoRequest +from .types.video_intelligence import AnnotateVideoResponse +from .types.video_intelligence import Entity +from .types.video_intelligence import ExplicitContentAnnotation +from .types.video_intelligence import ExplicitContentDetectionConfig +from .types.video_intelligence import ExplicitContentFrame +from .types.video_intelligence import Feature +from .types.video_intelligence import LabelAnnotation +from .types.video_intelligence import LabelDetectionConfig +from .types.video_intelligence import LabelDetectionMode +from .types.video_intelligence import LabelFrame +from .types.video_intelligence import LabelSegment +from .types.video_intelligence import Likelihood +from .types.video_intelligence import NormalizedBoundingBox +from .types.video_intelligence import NormalizedBoundingPoly +from .types.video_intelligence import NormalizedVertex +from .types.video_intelligence import ObjectTrackingAnnotation +from .types.video_intelligence import ObjectTrackingFrame +from .types.video_intelligence import ShotChangeDetectionConfig +from .types.video_intelligence import TextAnnotation +from .types.video_intelligence import TextDetectionConfig +from .types.video_intelligence import TextFrame +from .types.video_intelligence import TextSegment +from .types.video_intelligence import VideoAnnotationProgress +from .types.video_intelligence import VideoAnnotationResults 
+from .types.video_intelligence import VideoContext +from .types.video_intelligence import VideoSegment __all__ = ( - "enums", - "types", + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "Entity", + "ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "NormalizedBoundingBox", + "NormalizedBoundingPoly", + "NormalizedVertex", + "ObjectTrackingAnnotation", + "ObjectTrackingFrame", + "ShotChangeDetectionConfig", + "TextAnnotation", + "TextDetectionConfig", + "TextFrame", + "TextSegment", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoSegment", "VideoIntelligenceServiceClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/enums.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/enums.py deleted file mode 100644 index 71ecdf88395f..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/enums.py +++ /dev/null @@ -1,78 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Feature(enum.IntEnum): - """ - Video annotation feature. - - Attributes: - FEATURE_UNSPECIFIED (int): Unspecified. - LABEL_DETECTION (int): Label detection. Detect objects, such as dog or flower. - SHOT_CHANGE_DETECTION (int): Shot change detection. - EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - TEXT_DETECTION (int): OCR text detection and tracking. - OBJECT_TRACKING (int): Object detection and tracking. - """ - - FEATURE_UNSPECIFIED = 0 - LABEL_DETECTION = 1 - SHOT_CHANGE_DETECTION = 2 - EXPLICIT_CONTENT_DETECTION = 3 - TEXT_DETECTION = 7 - OBJECT_TRACKING = 9 - - -class LabelDetectionMode(enum.IntEnum): - """ - Label detection mode. - - Attributes: - LABEL_DETECTION_MODE_UNSPECIFIED (int): Unspecified. - SHOT_MODE (int): Detect shot-level labels. - FRAME_MODE (int): Detect frame-level labels. - SHOT_AND_FRAME_MODE (int): Detect both shot-level and frame-level labels. - """ - - LABEL_DETECTION_MODE_UNSPECIFIED = 0 - SHOT_MODE = 1 - FRAME_MODE = 2 - SHOT_AND_FRAME_MODE = 3 - - -class Likelihood(enum.IntEnum): - """ - Bucketized representation of likelihood. - - Attributes: - LIKELIHOOD_UNSPECIFIED (int): Unspecified likelihood. - VERY_UNLIKELY (int): Very unlikely. - UNLIKELY (int): Unlikely. - POSSIBLE (int): Possible. - LIKELY (int): Likely. - VERY_LIKELY (int): Very likely. 
- """ - - LIKELIHOOD_UNSPECIFIED = 0 - VERY_UNLIKELY = 1 - UNLIKELY = 2 - POSSIBLE = 3 - LIKELY = 4 - VERY_LIKELY = 5 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/video_intelligence_service_grpc_transport.py deleted file mode 100644 index 06c95853fe44..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/transports/video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.videointelligence_v1p2beta1.proto import video_intelligence_pb2_grpc - - -class VideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1p2beta1 VideoIntelligenceService API. 
- - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "video_intelligence_service_stub": video_intelligence_pb2_grpc.VideoIntelligenceServiceStub( - channel - ), - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. 
- self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def annotate_video(self): - """Return the gRPC stub for :meth:`VideoIntelligenceServiceClient.annotate_video`. - - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["video_intelligence_service_stub"].AnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client.py deleted file mode 100644 index 915a54c444cb..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client.py +++ /dev/null @@ -1,309 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.cloud.videointelligence.v1p2beta1 VideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import grpc - -from google.cloud.videointelligence_v1p2beta1.gapic import enums -from google.cloud.videointelligence_v1p2beta1.gapic import ( - video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1p2beta1.gapic.transports import ( - video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1p2beta1.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1p2beta1.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class VideoIntelligenceServiceClient(object): - """Service that implements Google Cloud Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = ( - "google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService" - ) - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - VideoIntelligenceServiceClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.VideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.VideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. 
API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) 
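The endpoint-resolution logic removed above accepts `client_options` as either a dict or an options object and lets `api_endpoint` override the default service address. A hedged sketch of that resolution order (`resolve_endpoint` is a hypothetical helper; the real code converts dicts via `google.api_core.client_options.from_dict`):

```python
DEFAULT_ADDRESS = "videointelligence.googleapis.com:443"

def resolve_endpoint(client_options=None):
    """Sketch of the deleted constructor's endpoint resolution."""
    api_endpoint = DEFAULT_ADDRESS
    if client_options:
        # Accept both dict-style and attribute-style client_options.
        endpoint = (client_options.get("api_endpoint")
                    if isinstance(client_options, dict)
                    else getattr(client_options, "api_endpoint", None))
        if endpoint:
            api_endpoint = endpoint
    return api_endpoint
```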
- self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def annotate_video( - self, - features, - input_uri=None, - input_content=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Example: - >>> from google.cloud import videointelligence_v1p2beta1 - >>> from google.cloud.videointelligence_v1p2beta1 import enums - >>> - >>> client = videointelligence_v1p2beta1.VideoIntelligenceServiceClient() - >>> - >>> features_element = enums.Feature.LABEL_DETECTION - >>> features = [features_element] - >>> input_uri = 'gs://cloud-samples-data/video/cat.mp4' - >>> - >>> response = client.annotate_video(features, input_uri=input_uri) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - features (list[~google.cloud.videointelligence_v1p2beta1.types.Feature]): Required. Requested video annotation features. - input_uri (str): Input video location. 
Currently, only `Google Cloud - Storage <https://cloud.google.com/storage/>`__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. A - video URI may include wildcards in ``object-id``, and thus identify - multiple videos. Supported wildcards: '*' to match 0 or more characters; - '?' to match 1 character. If unset, the input video should be embedded - in the request as ``input_content``. If set, ``input_content`` should be - unset. - input_content (bytes): The video data bytes. If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be unset. - video_context (Union[dict, ~google.cloud.videointelligence_v1p2beta1.types.VideoContext]): Additional video context and/or feature-specific parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.videointelligence_v1p2beta1.types.VideoContext` - output_uri (str): Optional. Location where the output (in JSON format) should be - stored. Currently, only `Google Cloud - Storage <https://cloud.google.com/storage/>`__ URIs are supported, which - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs <https://cloud.google.com/storage/docs/request-endpoints>`__. - location_id (str): Optional. Cloud region where annotation should take place. Supported - cloud regions: ``us-east1``, ``us-west1``, ``europe-west1``, - ``asia-east1``. If no region is specified, a region will be determined - based on video file location. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt.
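The docstring above specifies the wildcard semantics for ``input_uri``: `'*'` matches zero or more characters and `'?'` matches exactly one. Those are the same rules the stdlib `fnmatch` module implements, so the matching can be illustrated locally (`uri_matches` is a hypothetical helper for illustration; the actual expansion happens server-side, and `fnmatch` additionally supports `[seq]` classes the API does not document):

```python
from fnmatch import fnmatch

def uri_matches(input_uri_pattern, object_uri):
    """Sketch of the documented input_uri wildcards:
    '*' -> 0 or more characters, '?' -> exactly 1 character."""
    return fnmatch(object_uri, input_uri_pattern)
```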
- metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.videointelligence_v1p2beta1.types._OperationFuture` instance. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.annotate_video, - default_retry=self._method_configs["AnnotateVideo"].retry, - default_timeout=self._method_configs["AnnotateVideo"].timeout, - client_info=self._client_info, - ) - - request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, - input_uri=input_uri, - input_content=input_content, - video_context=video_context, - output_uri=output_uri, - location_id=location_id, - ) - operation = self._inner_api_calls["annotate_video"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - video_intelligence_pb2.AnnotateVideoResponse, - metadata_type=video_intelligence_pb2.AnnotateVideoProgress, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client_config.py deleted file mode 100644 index 104643b4f6ab..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/gapic/video_intelligence_service_client_config.py +++ /dev/null @@ -1,37 +0,0 @@ -config = { - "interfaces": { - 
"google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService": { - "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - }, - "retry_params": { - "retry_policy_1_params": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - "retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "AnnotateVideo": { - "timeout_millis": 600000, - "retry_codes_name": "retry_policy_1_codes", - "retry_params_name": "retry_policy_1_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto deleted file mode 100644 index 3b1d51cd7754..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto +++ /dev/null @@ -1,476 +0,0 @@ -// Copyright 2019 Google LLC. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. 
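The deleted client config above encodes `retry_policy_1_params` as an exponential backoff: a 1000 ms initial delay grown by a 2.5x multiplier and capped at 120 s. A sketch of the delay schedule that policy produces (`backoff_delays` is a hypothetical helper; the real `google.api_core` retry also applies random jitter and a total-timeout budget, which this omits):

```python
def backoff_delays(initial_ms=1000, multiplier=2.5, max_delay_ms=120000, attempts=7):
    """Sketch of the exponential backoff in retry_policy_1_params:
    delay grows by `multiplier` each attempt, capped at `max_delay_ms`."""
    delays, delay = [], float(initial_ms)
    for _ in range(attempts):
        delays.append(min(delay, max_delay_ms))
        delay *= multiplier
    return delays
```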
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. -// - -syntax = "proto3"; - -package google.cloud.videointelligence.v1p2beta1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; - -option csharp_namespace = "Google.Cloud.VideoIntelligence.V1P2Beta1"; -option go_package = "google.golang.org/genproto/googleapis/cloud/videointelligence/v1p2beta1;videointelligence"; -option java_multiple_files = true; -option java_outer_classname = "VideoIntelligenceServiceProto"; -option java_package = "com.google.cloud.videointelligence.v1p2beta1"; -option php_namespace = "Google\\Cloud\\VideoIntelligence\\V1p2beta1"; -option ruby_package = "Google::Cloud::VideoIntelligence::V1p2beta1"; - -// Service that implements Google Cloud Video Intelligence API. -service VideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = "https://www.googleapis.com/auth/cloud-platform"; - - // Performs asynchronous video annotation. Progress and results can be - // retrieved through the `google.longrunning.Operations` interface. - // `Operation.metadata` contains `AnnotateVideoProgress` (progress). - // `Operation.response` contains `AnnotateVideoResponse` (results). 
- rpc AnnotateVideo(AnnotateVideoRequest) returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1p2beta1/videos:annotate" - body: "*" - }; - option (google.api.method_signature) = "input_uri,features"; - option (google.longrunning.operation_info) = { - response_type: "AnnotateVideoResponse" - metadata_type: "AnnotateVideoProgress" - }; - } -} - -// Video annotation request. -message AnnotateVideoRequest { - // Input video location. Currently, only - // [Google Cloud Storage](https://cloud.google.com/storage/) URIs are - // supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see - // [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). - // A video URI may include wildcards in `object-id`, and thus identify - // multiple videos. Supported wildcards: '*' to match 0 or more characters; - // '?' to match 1 character. If unset, the input video should be embedded - // in the request as `input_content`. If set, `input_content` should be unset. - string input_uri = 1; - - // The video data bytes. - // If unset, the input video(s) should be specified via `input_uri`. - // If set, `input_uri` should be unset. - bytes input_content = 6; - - // Required. Requested video annotation features. - repeated Feature features = 2 [(google.api.field_behavior) = REQUIRED]; - - // Additional video context and/or feature-specific parameters. - VideoContext video_context = 3; - - // Optional. Location where the output (in JSON format) should be stored. - // Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) - // URIs are supported, which must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). 
For more information, see - // [Request URIs](https://cloud.google.com/storage/docs/request-endpoints). - string output_uri = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. Cloud region where annotation should take place. Supported cloud - // regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region - // is specified, a region will be determined based on video file location. - string location_id = 5 [(google.api.field_behavior) = OPTIONAL]; -} - -// Video context and/or feature-specific parameters. -message VideoContext { - // Video segments to annotate. The segments may overlap and are not required - // to be contiguous or span the whole video. If unspecified, each video is - // treated as a single segment. - repeated VideoSegment segments = 1; - - // Config for LABEL_DETECTION. - LabelDetectionConfig label_detection_config = 2; - - // Config for SHOT_CHANGE_DETECTION. - ShotChangeDetectionConfig shot_change_detection_config = 3; - - // Config for EXPLICIT_CONTENT_DETECTION. - ExplicitContentDetectionConfig explicit_content_detection_config = 4; - - // Config for TEXT_DETECTION. - TextDetectionConfig text_detection_config = 8; -} - -// Config for LABEL_DETECTION. -message LabelDetectionConfig { - // What labels should be detected with LABEL_DETECTION, in addition to - // video-level labels or segment-level labels. - // If unspecified, defaults to `SHOT_MODE`. - LabelDetectionMode label_detection_mode = 1; - - // Whether the video has been shot from a stationary (i.e. non-moving) camera. - // When set to true, might improve detection accuracy for moving objects. - // Should be used with `SHOT_AND_FRAME_MODE` enabled. - bool stationary_camera = 2; - - // Model to use for label detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 3; -} - -// Config for SHOT_CHANGE_DETECTION. -message ShotChangeDetectionConfig { - // Model to use for shot change detection. 
- // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for EXPLICIT_CONTENT_DETECTION. -message ExplicitContentDetectionConfig { - // Model to use for explicit content detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for TEXT_DETECTION. -message TextDetectionConfig { - // Language hint can be specified if the language to be detected is known a - // priori. It can increase the accuracy of the detection. Language hint must - // be language code in BCP-47 format. - // - // Automatic language detection is performed if no hint is provided. - repeated string language_hints = 1; -} - -// Video segment. -message VideoSegment { - // Time-offset, relative to the beginning of the video, - // corresponding to the start of the segment (inclusive). - google.protobuf.Duration start_time_offset = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the end of the segment (inclusive). - google.protobuf.Duration end_time_offset = 2; -} - -// Video segment level annotation results for label detection. -message LabelSegment { - // Video segment where a label was detected. - VideoSegment segment = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Video frame level annotation results for label detection. -message LabelFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Detected entity from video analysis. -message Entity { - // Opaque entity ID. Some IDs may be available in - // [Google Knowledge Graph Search - // API](https://developers.google.com/knowledge-graph/). - string entity_id = 1; - - // Textual description, e.g. `Fixed-gear bicycle`. 
- string description = 2; - - // Language code for `description` in BCP-47 format. - string language_code = 3; -} - -// Label annotation. -message LabelAnnotation { - // Detected entity. - Entity entity = 1; - - // Common categories for the detected entity. - // E.g. when the label is `Terrier` the category is likely `dog`. And in some - // cases there might be more than one category, e.g. `Terrier` could also be - // a `pet`. - repeated Entity category_entities = 2; - - // All video segments where a label was detected. - repeated LabelSegment segments = 3; - - // All video frames where a label was detected. - repeated LabelFrame frames = 4; -} - -// Video frame level annotation results for explicit content. -message ExplicitContentFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Likelihood of the pornography content. - Likelihood pornography_likelihood = 2; -} - -// Explicit content annotation (based on per-frame visual signals only). -// If no explicit content has been detected in a frame, no annotations are -// present for that frame. -message ExplicitContentAnnotation { - // All video frames where explicit content was detected. - repeated ExplicitContentFrame frames = 1; -} - -// Normalized bounding box. -// The normalized vertex coordinates are relative to the original image. -// Range: [0, 1]. -message NormalizedBoundingBox { - // Left X coordinate. - float left = 1; - - // Top Y coordinate. - float top = 2; - - // Right X coordinate. - float right = 3; - - // Bottom Y coordinate. - float bottom = 4; -} - -// Annotation results for a single video. -message VideoAnnotationResults { - // Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Label annotations on video level or user specified segment level. - // There is exactly one element for each unique label.
- repeated LabelAnnotation segment_label_annotations = 2; - - // Label annotations on shot level. - // There is exactly one element for each unique label. - repeated LabelAnnotation shot_label_annotations = 3; - - // Label annotations on frame level. - // There is exactly one element for each unique label. - repeated LabelAnnotation frame_label_annotations = 4; - - // Shot annotations. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 6; - - // Explicit content annotation. - ExplicitContentAnnotation explicit_annotation = 7; - - // OCR text detection and tracking. - // Annotations for list of detected text snippets. Each will have list of - // frame information associated with it. - repeated TextAnnotation text_annotations = 12; - - // Annotations for list of objects detected and tracked in video. - repeated ObjectTrackingAnnotation object_annotations = 14; - - // If set, indicates an error. Note that for a single `AnnotateVideoRequest` - // some videos may succeed and some may fail. - google.rpc.Status error = 9; -} - -// Video annotation response. Included in the `response` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoResponse { - // Annotation results for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationResults annotation_results = 1; -} - -// Annotation progress for a single video. -message VideoAnnotationProgress { - // Video file location in - // [Google Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Approximate percentage processed thus far. Guaranteed to be - // 100 when fully processed. - int32 progress_percent = 2; - - // Time when the request was received. - google.protobuf.Timestamp start_time = 3; - - // Time of the most recent update. - google.protobuf.Timestamp update_time = 4; -} - -// Video annotation progress. 
Included in the `metadata` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoProgress { - // Progress metadata for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationProgress annotation_progress = 1; -} - -// A vertex represents a 2D point in the image. -// NOTE: the normalized vertex coordinates are relative to the original image -// and range from 0 to 1. -message NormalizedVertex { - // X coordinate. - float x = 1; - - // Y coordinate. - float y = 2; -} - -// Normalized bounding polygon for text (that might not be aligned with axis). -// Contains list of the corner points in clockwise order starting from -// top-left corner. For example, for a rectangular bounding box: -// When the text is horizontal it might look like: -// 0----1 -// | | -// 3----2 -// -// When it's clockwise rotated 180 degrees around the top-left corner it -// becomes: -// 2----3 -// | | -// 1----0 -// -// and the vertex order will still be (0, 1, 2, 3). Note that values can be less -// than 0, or greater than 1 due to trigonometric calculations for location of -// the box. -message NormalizedBoundingPoly { - // Normalized vertices of the bounding polygon. - repeated NormalizedVertex vertices = 1; -} - -// Video segment level annotation results for text detection. -message TextSegment { - // Video segment where a text snippet was detected. - VideoSegment segment = 1; - - // Confidence for the track of detected text. It is calculated as the highest - // over all frames where OCR detected text appears. - float confidence = 2; - - // Information related to the frames where OCR detected text appears. - repeated TextFrame frames = 3; -} - -// Video frame level annotation results for text annotation (OCR). -// Contains information regarding timestamp and bounding box locations for the -// frames containing detected OCR text snippets.
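The `NormalizedVertex` comments above say coordinates are relative to the original image and nominally in [0, 1], but may fall outside that range for rotated text boxes. Converting them to pixel coordinates is a simple scale, sketched here (the `to_pixels` helper is hypothetical; no clamping is applied, precisely because out-of-range values are legal):

```python
def to_pixels(normalized_vertices, width, height):
    """Scale NormalizedVertex (x, y) pairs in image-relative coordinates
    to integer pixel coordinates for an image of the given size."""
    return [(round(x * width), round(y * height)) for x, y in normalized_vertices]
```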
-message TextFrame { - // Bounding polygon of the detected text for this frame. - NormalizedBoundingPoly rotated_bounding_box = 1; - - // Timestamp of this frame. - google.protobuf.Duration time_offset = 2; -} - -// Annotations related to one detected OCR text snippet. This will contain the -// corresponding text, confidence value, and frame level information for each -// detection. -message TextAnnotation { - // The detected text. - string text = 1; - - // All video segments where OCR detected text appears. - repeated TextSegment segments = 2; -} - -// Video frame level annotations for object detection and tracking. This field -// stores per frame location, time offset, and confidence. -message ObjectTrackingFrame { - // The normalized bounding box location of this object track for the frame. - NormalizedBoundingBox normalized_bounding_box = 1; - - // The timestamp of the frame in microseconds. - google.protobuf.Duration time_offset = 2; -} - -// Annotations corresponding to one tracked object. -message ObjectTrackingAnnotation { - // Entity to specify the object category that this track is labeled as. - Entity entity = 1; - - // Object category's labeling confidence of this track. - float confidence = 4; - - // Information corresponding to all frames where this object track appears. - repeated ObjectTrackingFrame frames = 2; - - // Each object track corresponds to one video segment where it appears. - VideoSegment segment = 3; -} - -// Video annotation feature. -enum Feature { - // Unspecified. - FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - LABEL_DETECTION = 1; - - // Shot change detection. - SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - EXPLICIT_CONTENT_DETECTION = 3; - - // OCR text detection and tracking. - TEXT_DETECTION = 7; - - // Object detection and tracking. - OBJECT_TRACKING = 9; -} - -// Label detection mode. -enum LabelDetectionMode { - // Unspecified. 
- LABEL_DETECTION_MODE_UNSPECIFIED = 0; - - // Detect shot-level labels. - SHOT_MODE = 1; - - // Detect frame-level labels. - FRAME_MODE = 2; - - // Detect both shot-level and frame-level labels. - SHOT_AND_FRAME_MODE = 3; -} - -// Bucketized representation of likelihood. -enum Likelihood { - // Unspecified likelihood. - LIKELIHOOD_UNSPECIFIED = 0; - - // Very unlikely. - VERY_UNLIKELY = 1; - - // Unlikely. - UNLIKELY = 2; - - // Possible. - POSSIBLE = 3; - - // Likely. - LIKELY = 4; - - // Very likely. - VERY_LIKELY = 5; -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2.py deleted file mode 100644 index 5d55b50ff87e..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2.py +++ /dev/null @@ -1,2853 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
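The bucketized `Likelihood` enum deleted above is ordered, so consumers typically threshold on it, e.g. keeping only frames whose `pornography_likelihood` is `LIKELY` or higher. A stdlib sketch with the same values (the `flag_frames` helper and its `(offset, likelihood)` pair representation are hypothetical; real results carry `ExplicitContentFrame` messages):

```python
from enum import IntEnum

class Likelihood(IntEnum):
    """Mirror of the deleted Likelihood enum values."""
    LIKELIHOOD_UNSPECIFIED = 0
    VERY_UNLIKELY = 1
    UNLIKELY = 2
    POSSIBLE = 3
    LIKELY = 4
    VERY_LIKELY = 5

def flag_frames(frames, threshold=Likelihood.LIKELY):
    """Return time offsets (seconds) of frames meeting the threshold."""
    return [offset for offset, likelihood in frames if likelihood >= threshold]
```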
-# source: google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto -"""Generated protocol buffer code.""" -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto", - package="google.cloud.videointelligence.v1p2beta1", - syntax="proto3", - serialized_options=b"\n,com.google.cloud.videointelligence.v1p2beta1B\035VideoIntelligenceServiceProtoP\001ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p2beta1;videointelligence\252\002(Google.Cloud.VideoIntelligence.V1P2Beta1\312\002(Google\\Cloud\\VideoIntelligence\\V1p2beta1\352\002+Google::Cloud::VideoIntelligence::V1p2beta1", - create_key=_descriptor._internal_create_key, - 
serialized_pb=b'\nGgoogle/cloud/videointelligence_v1p2beta1/proto/video_intelligence.proto\x12(google.cloud.videointelligence.v1p2beta1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a#google/longrunning/operations.proto\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17google/rpc/status.proto"\x8c\x02\n\x14\x41nnotateVideoRequest\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x15\n\rinput_content\x18\x06 \x01(\x0c\x12H\n\x08\x66\x65\x61tures\x18\x02 \x03(\x0e\x32\x31.google.cloud.videointelligence.v1p2beta1.FeatureB\x03\xe0\x41\x02\x12M\n\rvideo_context\x18\x03 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoContext\x12\x17\n\noutput_uri\x18\x04 \x01(\tB\x03\xe0\x41\x01\x12\x18\n\x0blocation_id\x18\x05 \x01(\tB\x03\xe0\x41\x01"\xf6\x03\n\x0cVideoContext\x12H\n\x08segments\x18\x01 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoSegment\x12^\n\x16label_detection_config\x18\x02 \x01(\x0b\x32>.google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig\x12i\n\x1cshot_change_detection_config\x18\x03 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p2beta1.ShotChangeDetectionConfig\x12s\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32H.google.cloud.videointelligence.v1p2beta1.ExplicitContentDetectionConfig\x12\\\n\x15text_detection_config\x18\x08 \x01(\x0b\x32=.google.cloud.videointelligence.v1p2beta1.TextDetectionConfig"\x9c\x01\n\x14LabelDetectionConfig\x12Z\n\x14label_detection_mode\x18\x01 \x01(\x0e\x32<.google.cloud.videointelligence.v1p2beta1.LabelDetectionMode\x12\x19\n\x11stationary_camera\x18\x02 \x01(\x08\x12\r\n\x05model\x18\x03 \x01(\t"*\n\x19ShotChangeDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"/\n\x1e\x45xplicitContentDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"-\n\x13TextDetectionConfig\x12\x16\n\x0elanguage_hints\x18\x01 \x03(\t"x\n\x0cVideoSegment\x12\x34\n\x11start_time_offset\x18\x01 
\x01(\x0b\x32\x19.google.protobuf.Duration\x12\x32\n\x0f\x65nd_time_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"k\n\x0cLabelSegment\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02"P\n\nLabelFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x12\n\nconfidence\x18\x02 \x01(\x02"G\n\x06\x45ntity\x12\x11\n\tentity_id\x18\x01 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 \x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t"\xb0\x02\n\x0fLabelAnnotation\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p2beta1.Entity\x12K\n\x11\x63\x61tegory_entities\x18\x02 \x03(\x0b\x32\x30.google.cloud.videointelligence.v1p2beta1.Entity\x12H\n\x08segments\x18\x03 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.LabelSegment\x12\x44\n\x06\x66rames\x18\x04 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1p2beta1.LabelFrame"\x9c\x01\n\x14\x45xplicitContentFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12T\n\x16pornography_likelihood\x18\x02 \x01(\x0e\x32\x34.google.cloud.videointelligence.v1p2beta1.Likelihood"k\n\x19\x45xplicitContentAnnotation\x12N\n\x06\x66rames\x18\x01 \x03(\x0b\x32>.google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame"Q\n\x15NormalizedBoundingBox\x12\x0c\n\x04left\x18\x01 \x01(\x02\x12\x0b\n\x03top\x18\x02 \x01(\x02\x12\r\n\x05right\x18\x03 \x01(\x02\x12\x0e\n\x06\x62ottom\x18\x04 \x01(\x02"\xcb\x05\n\x16VideoAnnotationResults\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\\\n\x19segment_label_annotations\x18\x02 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p2beta1.LabelAnnotation\x12Y\n\x16shot_label_annotations\x18\x03 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p2beta1.LabelAnnotation\x12Z\n\x17\x66rame_label_annotations\x18\x04 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p2beta1.LabelAnnotation\x12P\n\x10shot_annotations\x18\x06 
\x03(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoSegment\x12`\n\x13\x65xplicit_annotation\x18\x07 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p2beta1.ExplicitContentAnnotation\x12R\n\x10text_annotations\x18\x0c \x03(\x0b\x32\x38.google.cloud.videointelligence.v1p2beta1.TextAnnotation\x12^\n\x12object_annotations\x18\x0e \x03(\x0b\x32\x42.google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation\x12!\n\x05\x65rror\x18\t \x01(\x0b\x32\x12.google.rpc.Status"u\n\x15\x41nnotateVideoResponse\x12\\\n\x12\x61nnotation_results\x18\x01 \x03(\x0b\x32@.google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults"\xa7\x01\n\x17VideoAnnotationProgress\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x18\n\x10progress_percent\x18\x02 \x01(\x05\x12.\n\nstart_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0bupdate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp"w\n\x15\x41nnotateVideoProgress\x12^\n\x13\x61nnotation_progress\x18\x01 \x03(\x0b\x32\x41.google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress"(\n\x10NormalizedVertex\x12\t\n\x01x\x18\x01 \x01(\x02\x12\t\n\x01y\x18\x02 \x01(\x02"f\n\x16NormalizedBoundingPoly\x12L\n\x08vertices\x18\x01 \x03(\x0b\x32:.google.cloud.videointelligence.v1p2beta1.NormalizedVertex"\xaf\x01\n\x0bTextSegment\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02\x12\x43\n\x06\x66rames\x18\x03 \x03(\x0b\x32\x33.google.cloud.videointelligence.v1p2beta1.TextFrame"\x9b\x01\n\tTextFrame\x12^\n\x14rotated_bounding_box\x18\x01 \x01(\x0b\x32@.google.cloud.videointelligence.v1p2beta1.NormalizedBoundingPoly\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"g\n\x0eTextAnnotation\x12\x0c\n\x04text\x18\x01 \x01(\t\x12G\n\x08segments\x18\x02 \x03(\x0b\x32\x35.google.cloud.videointelligence.v1p2beta1.TextSegment"\xa7\x01\n\x13ObjectTrackingFrame\x12`\n\x17normalized_bounding_box\x18\x01 
\x01(\x0b\x32?.google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"\x88\x02\n\x18ObjectTrackingAnnotation\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p2beta1.Entity\x12\x12\n\nconfidence\x18\x04 \x01(\x02\x12M\n\x06\x66rames\x18\x02 \x03(\x0b\x32=.google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame\x12G\n\x07segment\x18\x03 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p2beta1.VideoSegment*\x9b\x01\n\x07\x46\x65\x61ture\x12\x17\n\x13\x46\x45\x41TURE_UNSPECIFIED\x10\x00\x12\x13\n\x0fLABEL_DETECTION\x10\x01\x12\x19\n\x15SHOT_CHANGE_DETECTION\x10\x02\x12\x1e\n\x1a\x45XPLICIT_CONTENT_DETECTION\x10\x03\x12\x12\n\x0eTEXT_DETECTION\x10\x07\x12\x13\n\x0fOBJECT_TRACKING\x10\t*r\n\x12LabelDetectionMode\x12$\n LABEL_DETECTION_MODE_UNSPECIFIED\x10\x00\x12\r\n\tSHOT_MODE\x10\x01\x12\x0e\n\nFRAME_MODE\x10\x02\x12\x17\n\x13SHOT_AND_FRAME_MODE\x10\x03*t\n\nLikelihood\x12\x1a\n\x16LIKELIHOOD_UNSPECIFIED\x10\x00\x12\x11\n\rVERY_UNLIKELY\x10\x01\x12\x0c\n\x08UNLIKELY\x10\x02\x12\x0c\n\x08POSSIBLE\x10\x03\x12\n\n\x06LIKELY\x10\x04\x12\x0f\n\x0bVERY_LIKELY\x10\x05\x32\xce\x02\n\x18VideoIntelligenceService\x12\xdb\x01\n\rAnnotateVideo\x12>.google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest\x1a\x1d.google.longrunning.Operation"k\x82\xd3\xe4\x93\x02\x1f"\x1a/v1p2beta1/videos:annotate:\x01*\xda\x41\x12input_uri,features\xca\x41.\n\x15\x41nnotateVideoResponse\x12\x15\x41nnotateVideoProgress\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platformB\xae\x02\n,com.google.cloud.videointelligence.v1p2beta1B\x1dVideoIntelligenceServiceProtoP\x01ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p2beta1;videointelligence\xaa\x02(Google.Cloud.VideoIntelligence.V1P2Beta1\xca\x02(Google\\Cloud\\VideoIntelligence\\V1p2beta1\xea\x02+Google::Cloud::VideoIntelligence::V1p2beta1b\x06proto3', - 
dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - google_dot_rpc_dot_status__pb2.DESCRIPTOR, - ], -) - -_FEATURE = _descriptor.EnumDescriptor( - name="Feature", - full_name="google.cloud.videointelligence.v1p2beta1.Feature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="TEXT_DETECTION", - index=4, - number=7, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="OBJECT_TRACKING", - index=5, - number=9, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4604, - serialized_end=4759, -) -_sym_db.RegisterEnumDescriptor(_FEATURE) - -Feature = enum_type_wrapper.EnumTypeWrapper(_FEATURE) -_LABELDETECTIONMODE = _descriptor.EnumDescriptor( - name="LabelDetectionMode", - 
full_name="google.cloud.videointelligence.v1p2beta1.LabelDetectionMode", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION_MODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_MODE", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FRAME_MODE", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_AND_FRAME_MODE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4761, - serialized_end=4875, -) -_sym_db.RegisterEnumDescriptor(_LABELDETECTIONMODE) - -LabelDetectionMode = enum_type_wrapper.EnumTypeWrapper(_LABELDETECTIONMODE) -_LIKELIHOOD = _descriptor.EnumDescriptor( - name="Likelihood", - full_name="google.cloud.videointelligence.v1p2beta1.Likelihood", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LIKELIHOOD_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_UNLIKELY", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="UNLIKELY", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="POSSIBLE", - index=3, - number=3, - serialized_options=None, - type=None, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LIKELY", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_LIKELY", - index=5, - number=5, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=4877, - serialized_end=4993, -) -_sym_db.RegisterEnumDescriptor(_LIKELIHOOD) - -Likelihood = enum_type_wrapper.EnumTypeWrapper(_LIKELIHOOD) -FEATURE_UNSPECIFIED = 0 -LABEL_DETECTION = 1 -SHOT_CHANGE_DETECTION = 2 -EXPLICIT_CONTENT_DETECTION = 3 -TEXT_DETECTION = 7 -OBJECT_TRACKING = 9 -LABEL_DETECTION_MODE_UNSPECIFIED = 0 -SHOT_MODE = 1 -FRAME_MODE = 2 -SHOT_AND_FRAME_MODE = 3 -LIKELIHOOD_UNSPECIFIED = 0 -VERY_UNLIKELY = 1 -UNLIKELY = 2 -POSSIBLE = 3 -LIKELY = 4 -VERY_LIKELY = 5 - - -_ANNOTATEVIDEOREQUEST = _descriptor.Descriptor( - name="AnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.input_content", - index=1, - number=6, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - 
containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="features", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.features", - index=2, - number=2, - type=14, - cpp_type=8, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_context", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.video_context", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="output_uri", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.output_uri", - index=4, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="location_id", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest.location_id", - index=5, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=333, - serialized_end=601, -) - - -_VIDEOCONTEXT = _descriptor.Descriptor( - name="VideoContext", - full_name="google.cloud.videointelligence.v1p2beta1.VideoContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p2beta1.VideoContext.segments", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_detection_config", - full_name="google.cloud.videointelligence.v1p2beta1.VideoContext.label_detection_config", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1p2beta1.VideoContext.shot_change_detection_config", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - 
full_name="google.cloud.videointelligence.v1p2beta1.VideoContext.explicit_content_detection_config", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="text_detection_config", - full_name="google.cloud.videointelligence.v1p2beta1.VideoContext.text_detection_config", - index=4, - number=8, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=604, - serialized_end=1106, -) - - -_LABELDETECTIONCONFIG = _descriptor.Descriptor( - name="LabelDetectionConfig", - full_name="google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="label_detection_mode", - full_name="google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.label_detection_mode", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="stationary_camera", - 
full_name="google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.stationary_camera", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig.model", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1109, - serialized_end=1265, -) - - -_SHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="ShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1p2beta1.ShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p2beta1.ShotChangeDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - 
oneofs=[], - serialized_start=1267, - serialized_end=1309, -) - - -_EXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="ExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1311, - serialized_end=1358, -) - - -_TEXTDETECTIONCONFIG = _descriptor.Descriptor( - name="TextDetectionConfig", - full_name="google.cloud.videointelligence.v1p2beta1.TextDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_hints", - full_name="google.cloud.videointelligence.v1p2beta1.TextDetectionConfig.language_hints", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1360, - serialized_end=1405, 
-) - - -_VIDEOSEGMENT = _descriptor.Descriptor( - name="VideoSegment", - full_name="google.cloud.videointelligence.v1p2beta1.VideoSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.VideoSegment.start_time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.VideoSegment.end_time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1407, - serialized_end=1527, -) - - -_LABELSEGMENT = _descriptor.Descriptor( - name="LabelSegment", - full_name="google.cloud.videointelligence.v1p2beta1.LabelSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p2beta1.LabelSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p2beta1.LabelSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1529, - serialized_end=1636, -) - - -_LABELFRAME = _descriptor.Descriptor( - name="LabelFrame", - full_name="google.cloud.videointelligence.v1p2beta1.LabelFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.LabelFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p2beta1.LabelFrame.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - 
is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1638, - serialized_end=1718, -) - - -_ENTITY = _descriptor.Descriptor( - name="Entity", - full_name="google.cloud.videointelligence.v1p2beta1.Entity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity_id", - full_name="google.cloud.videointelligence.v1p2beta1.Entity.entity_id", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - full_name="google.cloud.videointelligence.v1p2beta1.Entity.description", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1p2beta1.Entity.language_code", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1720, - serialized_end=1791, -) - - -_LABELANNOTATION = _descriptor.Descriptor( - 
name="LabelAnnotation", - full_name="google.cloud.videointelligence.v1p2beta1.LabelAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p2beta1.LabelAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="category_entities", - full_name="google.cloud.videointelligence.v1p2beta1.LabelAnnotation.category_entities", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p2beta1.LabelAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p2beta1.LabelAnnotation.frames", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1794, - serialized_end=2098, -) - - -_EXPLICITCONTENTFRAME = _descriptor.Descriptor( - name="ExplicitContentFrame", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="pornography_likelihood", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame.pornography_likelihood", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2101, - serialized_end=2257, -) - - -_EXPLICITCONTENTANNOTATION = _descriptor.Descriptor( - name="ExplicitContentAnnotation", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - 
_descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p2beta1.ExplicitContentAnnotation.frames", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2259, - serialized_end=2366, -) - - -_NORMALIZEDBOUNDINGBOX = _descriptor.Descriptor( - name="NormalizedBoundingBox", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="left", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.left", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="top", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.top", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="right", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.right", - index=2, - number=3, - 
type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="bottom", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox.bottom", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2368, - serialized_end=2449, -) - - -_VIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="VideoAnnotationResults", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_label_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.segment_label_annotations", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - 
message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_label_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.shot_label_annotations", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_label_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.frame_label_annotations", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.shot_annotations", - index=4, - number=6, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_annotation", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.explicit_annotation", - index=5, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - 
extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="text_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.text_annotations", - index=6, - number=12, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_annotations", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.object_annotations", - index=7, - number=14, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="error", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults.error", - index=8, - number=9, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2452, - serialized_end=3167, -) - - -_ANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="AnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - 
fields=[ - _descriptor.FieldDescriptor( - name="annotation_results", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse.annotation_results", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3169, - serialized_end=3286, -) - - -_VIDEOANNOTATIONPROGRESS = _descriptor.Descriptor( - name="VideoAnnotationProgress", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.progress_percent", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="start_time", - 
full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.start_time", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="update_time", - full_name="google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress.update_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3289, - serialized_end=3456, -) - - -_ANNOTATEVIDEOPROGRESS = _descriptor.Descriptor( - name="AnnotateVideoProgress", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_progress", - full_name="google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress.annotation_progress", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - 
serialized_start=3458, - serialized_end=3577, -) - - -_NORMALIZEDVERTEX = _descriptor.Descriptor( - name="NormalizedVertex", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedVertex", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="x", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedVertex.x", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="y", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedVertex.y", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3579, - serialized_end=3619, -) - - -_NORMALIZEDBOUNDINGPOLY = _descriptor.Descriptor( - name="NormalizedBoundingPoly", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingPoly", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="vertices", - full_name="google.cloud.videointelligence.v1p2beta1.NormalizedBoundingPoly.vertices", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3621, - serialized_end=3723, -) - - -_TEXTSEGMENT = _descriptor.Descriptor( - name="TextSegment", - full_name="google.cloud.videointelligence.v1p2beta1.TextSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p2beta1.TextSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p2beta1.TextSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p2beta1.TextSegment.frames", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - 
serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3726, - serialized_end=3901, -) - - -_TEXTFRAME = _descriptor.Descriptor( - name="TextFrame", - full_name="google.cloud.videointelligence.v1p2beta1.TextFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="rotated_bounding_box", - full_name="google.cloud.videointelligence.v1p2beta1.TextFrame.rotated_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.TextFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3904, - serialized_end=4059, -) - - -_TEXTANNOTATION = _descriptor.Descriptor( - name="TextAnnotation", - full_name="google.cloud.videointelligence.v1p2beta1.TextAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="text", - full_name="google.cloud.videointelligence.v1p2beta1.TextAnnotation.text", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - 
default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p2beta1.TextAnnotation.segments", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4061, - serialized_end=4164, -) - - -_OBJECTTRACKINGFRAME = _descriptor.Descriptor( - name="ObjectTrackingFrame", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_box", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame.normalized_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - 
extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4167, - serialized_end=4334, -) - - -_OBJECTTRACKINGANNOTATION = _descriptor.Descriptor( - name="ObjectTrackingAnnotation", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.confidence", - index=1, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.frames", - index=2, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation.segment", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4337, - serialized_end=4601, -) - -_ANNOTATEVIDEOREQUEST.fields_by_name["features"].enum_type = _FEATURE -_ANNOTATEVIDEOREQUEST.fields_by_name["video_context"].message_type = _VIDEOCONTEXT -_VIDEOCONTEXT.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_VIDEOCONTEXT.fields_by_name[ - "label_detection_config" -].message_type = _LABELDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "shot_change_detection_config" -].message_type = _SHOTCHANGEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "explicit_content_detection_config" -].message_type = _EXPLICITCONTENTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "text_detection_config" -].message_type = _TEXTDETECTIONCONFIG -_LABELDETECTIONCONFIG.fields_by_name[ - "label_detection_mode" -].enum_type = _LABELDETECTIONMODE -_VIDEOSEGMENT.fields_by_name[ - "start_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_VIDEOSEGMENT.fields_by_name[ - "end_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_LABELFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["category_entities"].message_type = _ENTITY 
-_LABELANNOTATION.fields_by_name["segments"].message_type = _LABELSEGMENT -_LABELANNOTATION.fields_by_name["frames"].message_type = _LABELFRAME -_EXPLICITCONTENTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_EXPLICITCONTENTFRAME.fields_by_name["pornography_likelihood"].enum_type = _LIKELIHOOD -_EXPLICITCONTENTANNOTATION.fields_by_name["frames"].message_type = _EXPLICITCONTENTFRAME -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "frame_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name["shot_annotations"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "text_annotations" -].message_type = _TEXTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "object_annotations" -].message_type = _OBJECTTRACKINGANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_ANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _VIDEOANNOTATIONRESULTS -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "update_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_ANNOTATEVIDEOPROGRESS.fields_by_name[ - "annotation_progress" -].message_type = _VIDEOANNOTATIONPROGRESS -_NORMALIZEDBOUNDINGPOLY.fields_by_name["vertices"].message_type = _NORMALIZEDVERTEX -_TEXTSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_TEXTSEGMENT.fields_by_name["frames"].message_type = _TEXTFRAME 
-_TEXTFRAME.fields_by_name["rotated_bounding_box"].message_type = _NORMALIZEDBOUNDINGPOLY -_TEXTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TEXTANNOTATION.fields_by_name["segments"].message_type = _TEXTSEGMENT -_OBJECTTRACKINGFRAME.fields_by_name[ - "normalized_bounding_box" -].message_type = _NORMALIZEDBOUNDINGBOX -_OBJECTTRACKINGFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_OBJECTTRACKINGANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_OBJECTTRACKINGANNOTATION.fields_by_name["frames"].message_type = _OBJECTTRACKINGFRAME -_OBJECTTRACKINGANNOTATION.fields_by_name["segment"].message_type = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["AnnotateVideoRequest"] = _ANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["VideoContext"] = _VIDEOCONTEXT -DESCRIPTOR.message_types_by_name["LabelDetectionConfig"] = _LABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ShotChangeDetectionConfig" -] = _SHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ExplicitContentDetectionConfig" -] = _EXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["TextDetectionConfig"] = _TEXTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["VideoSegment"] = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["LabelSegment"] = _LABELSEGMENT -DESCRIPTOR.message_types_by_name["LabelFrame"] = _LABELFRAME -DESCRIPTOR.message_types_by_name["Entity"] = _ENTITY -DESCRIPTOR.message_types_by_name["LabelAnnotation"] = _LABELANNOTATION -DESCRIPTOR.message_types_by_name["ExplicitContentFrame"] = _EXPLICITCONTENTFRAME -DESCRIPTOR.message_types_by_name[ - "ExplicitContentAnnotation" -] = _EXPLICITCONTENTANNOTATION -DESCRIPTOR.message_types_by_name["NormalizedBoundingBox"] = _NORMALIZEDBOUNDINGBOX -DESCRIPTOR.message_types_by_name["VideoAnnotationResults"] = _VIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name["AnnotateVideoResponse"] = 
_ANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name["VideoAnnotationProgress"] = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoProgress"] = _ANNOTATEVIDEOPROGRESS -DESCRIPTOR.message_types_by_name["NormalizedVertex"] = _NORMALIZEDVERTEX -DESCRIPTOR.message_types_by_name["NormalizedBoundingPoly"] = _NORMALIZEDBOUNDINGPOLY -DESCRIPTOR.message_types_by_name["TextSegment"] = _TEXTSEGMENT -DESCRIPTOR.message_types_by_name["TextFrame"] = _TEXTFRAME -DESCRIPTOR.message_types_by_name["TextAnnotation"] = _TEXTANNOTATION -DESCRIPTOR.message_types_by_name["ObjectTrackingFrame"] = _OBJECTTRACKINGFRAME -DESCRIPTOR.message_types_by_name["ObjectTrackingAnnotation"] = _OBJECTTRACKINGANNOTATION -DESCRIPTOR.enum_types_by_name["Feature"] = _FEATURE -DESCRIPTOR.enum_types_by_name["LabelDetectionMode"] = _LABELDETECTIONMODE -DESCRIPTOR.enum_types_by_name["Likelihood"] = _LIKELIHOOD -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -AnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation request. - - - Attributes: - input_uri: - Input video location. Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - A video URI may include wildcards in ``object-id``, and thus - identify multiple videos. Supported wildcards: ’*’ to match 0 - or more characters; ‘?’ to match 1 character. If unset, the - input video should be embedded in the request as - ``input_content``. If set, ``input_content`` should be unset. - input_content: - The video data bytes. 
If unset, the input video(s) should be - specified via ``input_uri``. If set, ``input_uri`` should be - unset. - features: - Required. Requested video annotation features. - video_context: - Additional video context and/or feature-specific parameters. - output_uri: - Optional. Location where the output (in JSON format) should be - stored. Currently, only `Google Cloud Storage - `__ URIs are supported, - which must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. - location_id: - Optional. Cloud region where annotation should take place. - Supported cloud regions: ``us-east1``, ``us-west1``, ``europe- - west1``, ``asia-east1``. If no region is specified, a region - will be determined based on video file location. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.AnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(AnnotateVideoRequest) - -VideoContext = _reflection.GeneratedProtocolMessageType( - "VideoContext", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOCONTEXT, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video context and/or feature-specific parameters. - - - Attributes: - segments: - Video segments to annotate. The segments may overlap and are - not required to be contiguous or span the whole video. If - unspecified, each video is treated as a single segment. - label_detection_config: - Config for LABEL_DETECTION. - shot_change_detection_config: - Config for SHOT_CHANGE_DETECTION. - explicit_content_detection_config: - Config for EXPLICIT_CONTENT_DETECTION. - text_detection_config: - Config for TEXT_DETECTION. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.VideoContext) - }, -) -_sym_db.RegisterMessage(VideoContext) - -LabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "LabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _LABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Config for LABEL_DETECTION. - - - Attributes: - label_detection_mode: - What labels should be detected with LABEL_DETECTION, in - addition to video-level labels or segment-level labels. If - unspecified, defaults to ``SHOT_MODE``. - stationary_camera: - Whether the video has been shot from a stationary (i.e. non- - moving) camera. When set to true, might improve detection - accuracy for moving objects. Should be used with - ``SHOT_AND_FRAME_MODE`` enabled. - model: - Model to use for label detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.LabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(LabelDetectionConfig) - -ShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Config for SHOT_CHANGE_DETECTION. - - - Attributes: - model: - Model to use for shot change detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(ShotChangeDetectionConfig) - -ExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Config for EXPLICIT_CONTENT_DETECTION. - - - Attributes: - model: - Model to use for explicit content detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(ExplicitContentDetectionConfig) - -TextDetectionConfig = _reflection.GeneratedProtocolMessageType( - "TextDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _TEXTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Config for TEXT_DETECTION. - - - Attributes: - language_hints: - Language hint can be specified if the language to be detected - is known a priori. It can increase the accuracy of the - detection. Language hint must be language code in BCP-47 - format. Automatic language detection is performed if no hint - is provided. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.TextDetectionConfig) - }, -) -_sym_db.RegisterMessage(TextDetectionConfig) - -VideoSegment = _reflection.GeneratedProtocolMessageType( - "VideoSegment", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOSEGMENT, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment. 
- - - Attributes: - start_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the start of the segment (inclusive). - end_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the end of the segment (inclusive). - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.VideoSegment) - }, -) -_sym_db.RegisterMessage(VideoSegment) - -LabelSegment = _reflection.GeneratedProtocolMessageType( - "LabelSegment", - (_message.Message,), - { - "DESCRIPTOR": _LABELSEGMENT, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for label detection. - - - Attributes: - segment: - Video segment where a label was detected. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.LabelSegment) - }, -) -_sym_db.RegisterMessage(LabelSegment) - -LabelFrame = _reflection.GeneratedProtocolMessageType( - "LabelFrame", - (_message.Message,), - { - "DESCRIPTOR": _LABELFRAME, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for label detection. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.LabelFrame) - }, -) -_sym_db.RegisterMessage(LabelFrame) - -Entity = _reflection.GeneratedProtocolMessageType( - "Entity", - (_message.Message,), - { - "DESCRIPTOR": _ENTITY, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Detected entity from video analysis. - - - Attributes: - entity_id: - Opaque entity ID. 
Some IDs may be available in `Google - Knowledge Graph Search API - `__. - description: - Textual description, e.g. ``Fixed-gear bicycle``. - language_code: - Language code for ``description`` in BCP-47 format. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.Entity) - }, -) -_sym_db.RegisterMessage(Entity) - -LabelAnnotation = _reflection.GeneratedProtocolMessageType( - "LabelAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LABELANNOTATION, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Label annotation. - - - Attributes: - entity: - Detected entity. - category_entities: - Common categories for the detected entity. E.g. when the label - is ``Terrier`` the category is likely ``dog``. And in some - cases there might be more than one category, e.g. ``Terrier`` - could also be a ``pet``. - segments: - All video segments where a label was detected. - frames: - All video frames where a label was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.LabelAnnotation) - }, -) -_sym_db.RegisterMessage(LabelAnnotation) - -ExplicitContentFrame = _reflection.GeneratedProtocolMessageType( - "ExplicitContentFrame", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTFRAME, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for explicit content. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - pornography_likelihood: - Likelihood of the pornography content. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ExplicitContentFrame) - }, -) -_sym_db.RegisterMessage(ExplicitContentFrame) - -ExplicitContentAnnotation = _reflection.GeneratedProtocolMessageType( - "ExplicitContentAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTANNOTATION, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Explicit content annotation (based on per-frame visual signals only). - If no explicit content has been detected in a frame, no annotations - are present for that frame. - - - Attributes: - frames: - All video frames where explicit content was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ExplicitContentAnnotation) - }, -) -_sym_db.RegisterMessage(ExplicitContentAnnotation) - -NormalizedBoundingBox = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingBox", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGBOX, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding box. The normalized vertex coordinates are - relative to the original image. Range: [0, 1]. - - - Attributes: - left: - Left X coordinate. - top: - Top Y coordinate. - right: - Right X coordinate. - bottom: - Bottom Y coordinate. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.NormalizedBoundingBox) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingBox) - -VideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation results for a single video. - - - Attributes: - input_uri: - Video file location in `Google Cloud Storage - `__. 
- segment_label_annotations: - Label annotations on video level or user specified segment - level. There is exactly one element for each unique label. - shot_label_annotations: - Label annotations on shot level. There is exactly one element - for each unique label. - frame_label_annotations: - Label annotations on frame level. There is exactly one element - for each unique label. - shot_annotations: - Shot annotations. Each shot is represented as a video segment. - explicit_annotation: - Explicit content annotation. - text_annotations: - OCR text detection and tracking. Annotations for list of - detected text snippets. Each will have list of frame - information associated with it. - object_annotations: - Annotations for list of objects detected and tracked in video. - error: - If set, indicates an error. Note that for a single - ``AnnotateVideoRequest`` some videos may succeed and some may - fail. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.VideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(VideoAnnotationResults) - -AnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation response. Included in the ``response`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_results: - Annotation results for all videos specified in - ``AnnotateVideoRequest``. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.AnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(AnnotateVideoResponse) - -VideoAnnotationProgress = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationProgress", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONPROGRESS, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation progress for a single video. - - - Attributes: - input_uri: - Video file location in `Google Cloud Storage - `__. - progress_percent: - Approximate percentage processed thus far. Guaranteed to be - 100 when fully processed. - start_time: - Time when the request was received. - update_time: - Time of the most recent update. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.VideoAnnotationProgress) - }, -) -_sym_db.RegisterMessage(VideoAnnotationProgress) - -AnnotateVideoProgress = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoProgress", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOPROGRESS, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation progress. Included in the ``metadata`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_progress: - Progress metadata for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.AnnotateVideoProgress) - }, -) -_sym_db.RegisterMessage(AnnotateVideoProgress) - -NormalizedVertex = _reflection.GeneratedProtocolMessageType( - "NormalizedVertex", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDVERTEX, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """A vertex represents a 2D point in the image. NOTE: the normalized - vertex coordinates are relative to the original image and range from 0 - to 1. - - - Attributes: - x: - X coordinate. - y: - Y coordinate. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.NormalizedVertex) - }, -) -_sym_db.RegisterMessage(NormalizedVertex) - -NormalizedBoundingPoly = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingPoly", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGPOLY, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding polygon for text (that might not be aligned with - axis). Contains list of the corner points in clockwise order starting - from top-left corner. For example, for a rectangular bounding box: - When the text is horizontal it might look like: 0—-1 \| \| 3—-2 When - it’s clockwise rotated 180 degrees around the top-left corner it - becomes: 2—-3 \| \| 1—-0 and the vertex order will still be (0, 1, 2, - 3). Note that values can be less than 0, or greater than 1 due to - trigonometric calculations for location of the box. - - - Attributes: - vertices: - Normalized vertices of the bounding polygon. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.NormalizedBoundingPoly) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingPoly) - -TextSegment = _reflection.GeneratedProtocolMessageType( - "TextSegment", - (_message.Message,), - { - "DESCRIPTOR": _TEXTSEGMENT, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for text detection. - - - Attributes: - segment: - Video segment where a text snippet was detected. - confidence: - Confidence for the track of detected text. It is calculated as - the highest over all frames where OCR detected text appears. - frames: - Information related to the frames where OCR detected text - appears. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.TextSegment) - }, -) -_sym_db.RegisterMessage(TextSegment) - -TextFrame = _reflection.GeneratedProtocolMessageType( - "TextFrame", - (_message.Message,), - { - "DESCRIPTOR": _TEXTFRAME, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for text annotation (OCR). - Contains information regarding timestamp and bounding box locations - for the frames containing detected OCR text snippets. - - - Attributes: - rotated_bounding_box: - Bounding polygon of the detected text for this frame. - time_offset: - Timestamp of this frame. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.TextFrame) - }, -) -_sym_db.RegisterMessage(TextFrame) - -TextAnnotation = _reflection.GeneratedProtocolMessageType( - "TextAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _TEXTANNOTATION, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Annotations related to one detected OCR text snippet. This will - contain the corresponding text, confidence value, and frame level - information for each detection. - - - Attributes: - text: - The detected text. - segments: - All video segments where OCR detected text appears. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.TextAnnotation) - }, -) -_sym_db.RegisterMessage(TextAnnotation) - -ObjectTrackingFrame = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingFrame", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGFRAME, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotations for object detection and tracking. This - field stores per frame location, time offset, and confidence. 
- - - Attributes: - normalized_bounding_box: - The normalized bounding box location of this object track for - the frame. - time_offset: - The timestamp of the frame in microseconds. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ObjectTrackingFrame) - }, -) -_sym_db.RegisterMessage(ObjectTrackingFrame) - -ObjectTrackingAnnotation = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGANNOTATION, - "__module__": "google.cloud.videointelligence_v1p2beta1.proto.video_intelligence_pb2", - "__doc__": """Annotations corresponding to one tracked object. - - - Attributes: - entity: - Entity to specify the object category that this track is - labeled as. - confidence: - Object category’s labeling confidence of this track. - frames: - Information corresponding to all frames where this object - track appears. - segment: - Each object track corresponds to one video segment where it - appears. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p2beta1.ObjectTrackingAnnotation) - }, -) -_sym_db.RegisterMessage(ObjectTrackingAnnotation) - - -DESCRIPTOR._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["features"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["output_uri"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["location_id"]._options = None - -_VIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="VideoIntelligenceService", - full_name="google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService", - file=DESCRIPTOR, - index=0, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=4996, - serialized_end=5330, - methods=[ - _descriptor.MethodDescriptor( - name="AnnotateVideo", - full_name="google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService.AnnotateVideo", - index=0, - containing_service=None, - input_type=_ANNOTATEVIDEOREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=b'\202\323\344\223\002\037"\032/v1p2beta1/videos:annotate:\001*\332A\022input_uri,features\312A.\n\025AnnotateVideoResponse\022\025AnnotateVideoProgress', - create_key=_descriptor._internal_create_key, - ), - ], -) -_sym_db.RegisterServiceDescriptor(_VIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name["VideoIntelligenceService"] = _VIDEOINTELLIGENCESERVICE - -# @@protoc_insertion_point(module_scope) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2_grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2_grpc.py deleted file mode 100644 index 09c444df1418..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/proto/video_intelligence_pb2_grpc.py +++ 
/dev/null @@ -1,92 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from google.cloud.videointelligence_v1p2beta1.proto import ( - video_intelligence_pb2 as google_dot_cloud_dot_videointelligence__v1p2beta1_dot_proto_dot_video__intelligence__pb2, -) -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) - - -class VideoIntelligenceServiceStub(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. - """ - self.AnnotateVideo = channel.unary_unary( - "/google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService/AnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1p2beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, - ) - - -class VideoIntelligenceServiceServicer(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - def AnnotateVideo(self, request, context): - """Performs asynchronous video annotation. Progress and results can be - retrieved through the `google.longrunning.Operations` interface. - `Operation.metadata` contains `AnnotateVideoProgress` (progress). - `Operation.response` contains `AnnotateVideoResponse` (results). 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_VideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "AnnotateVideo": grpc.unary_unary_rpc_method_handler( - servicer.AnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1p2beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class VideoIntelligenceService(object): - """Service that implements Google Cloud Video Intelligence API. - """ - - @staticmethod - def AnnotateVideo( - request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.unary_unary( - request, - target, - "/google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService/AnnotateVideo", - google_dot_cloud_dot_videointelligence__v1p2beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - google_dot_longrunning_dot_operations__pb2.Operation.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/py.typed new file mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ 
b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/__init__.py new file mode 100644 index 000000000000..42ffdf2bc43d --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/__init__.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/__init__.py new file mode 100644 index 000000000000..559320286ca2 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from .client import VideoIntelligenceServiceClient +from .async_client import VideoIntelligenceServiceAsyncClient + +__all__ = ( + "VideoIntelligenceServiceClient", + "VideoIntelligenceServiceAsyncClient", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/async_client.py new file mode 100644 index 000000000000..d62d470bde6e --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/async_client.py @@ -0,0 +1,270 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import Dict, Sequence, Tuple, Type, Union +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport +from .client import VideoIntelligenceServiceClient + + +class VideoIntelligenceServiceAsyncClient: + """Service that implements Google Cloud Video Intelligence API.""" + + _client: VideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + + common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod(VideoIntelligenceServiceClient.common_folder_path) + parse_common_folder_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + VideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + VideoIntelligenceServiceClient.common_project_path + ) + 
parse_common_project_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + VideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = VideoIntelligenceServiceClient.from_service_account_file + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(VideoIntelligenceServiceClient).get_transport_class, + type(VideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[str, VideoIntelligenceServiceTransport] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = VideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + async def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation_async.AsyncOperation: + r"""Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. 
Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. 
+ has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/client.py new file mode 100644 index 000000000000..4034262bd914 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/client.py @@ -0,0 +1,438 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
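Both the async and sync clients in this patch reject calls that mix a full `request` object with flattened field arguments. A minimal standalone sketch of that guard (the function name and parameters here are illustrative, not part of the library):

```python
def check_flattened_params(request=None, input_uri=None, features=None):
    # Mirrors the generated sanity check: a `request` object and
    # individual (flattened) field arguments are mutually exclusive.
    has_flattened_params = any([input_uri, features])
    if request is not None and has_flattened_params:
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )


# Flattened arguments alone are fine; mixing them with `request` raises.
check_flattened_params(input_uri="gs://bucket-id/object-id")
```

Note that `any([input_uri, features])` also treats an empty `features` list as "not set", which is why the generated code later guards with `if features:` before extending the request.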
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import Callable, Dict, Optional, Sequence, Tuple, Type, Union +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import VideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +class VideoIntelligenceServiceClientMeta(type): + """Metaclass for the VideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport + _transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[VideoIntelligenceServiceTransport]: + """Return an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. 
+ + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class VideoIntelligenceServiceClient(metaclass=VideoIntelligenceServiceClientMeta): + """Service that implements Google Cloud Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + {@api.name}: The constructed client.
+ """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse a organization path into its component segments.""" + m = re.match(r"^organizations/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + + @staticmethod + def parse_common_project_path(path: str) -> 
Dict[str, str]: + """Parse a project path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_location_path(project: str, location: str,) -> str: + """Return a fully-qualified location string.""" + return "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + + @staticmethod + def parse_common_location_path(path: str) -> Dict[str, str]: + """Parse a location path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path) + return m.groupdict() if m else {} + + def __init__( + self, + *, + credentials: Optional[credentials.Credentials] = None, + transport: Union[str, VideoIntelligenceServiceTransport, None] = None, + client_options: Optional[client_options_lib.ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (client_options_lib.ClientOptions): Custom options for the + client. It won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value).
However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, VideoIntelligenceServiceTransport): + # transport is a VideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: + r"""Performs asynchronous video annotation. 
Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and + thus identify multiple videos. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + should be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. 
Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # Minor optimization to avoid making a copy if the user passes + # in a video_intelligence.AnnotateVideoRequest. + # There's no risk of modifying the input as we've already verified + # there are no flattened fields. + if not isinstance(request, video_intelligence.AnnotateVideoRequest): + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.annotate_video] + + # Send the request. + response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..08de302230ba --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import VideoIntelligenceServiceTransport +from .grpc import VideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport +_transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "VideoIntelligenceServiceTransport", + "VideoIntelligenceServiceGrpcTransport", + "VideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..774e7752750c --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/base.py @@ -0,0 +1,141 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
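The transport registries compiled above (in the client metaclass and in `transports/__init__.py`) resolve a transport by label and fall back to the first entry by insertion order. A toy sketch of that lookup, with placeholder strings standing in for the real transport classes:

```python
from collections import OrderedDict

# Placeholder values stand in for the generated transport classes.
_transport_registry = OrderedDict()
_transport_registry["grpc"] = "VideoIntelligenceServiceGrpcTransport"
_transport_registry["grpc_asyncio"] = "VideoIntelligenceServiceGrpcAsyncIOTransport"


def get_transport_class(label=None):
    # If a specific transport is requested, return that one.
    if label:
        return _transport_registry[label]
    # No label: the first registered transport is the default.
    return next(iter(_transport_registry.values()))
```

Because an `OrderedDict` preserves insertion order, registering `"grpc"` first is what makes synchronous gRPC the default for `VideoIntelligenceServiceClient`, while the async client explicitly passes `transport="grpc_asyncio"`.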
+# + +import abc +import typing +import pkg_resources + +from google import auth # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.auth import credentials # type: ignore + +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +class VideoIntelligenceServiceTransport(abc.ABC): + """Abstract transport class for VideoIntelligenceService.""" + + AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: typing.Optional[str] = None, + scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES, + quota_project_id: typing.Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes.
+ quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.annotate_video: gapic_v1.method.wrap_method( + self.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=client_info, + ), + } + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.AnnotateVideoRequest], + typing.Union[operations.Operation, typing.Awaitable[operations.Operation]], + ]: + raise NotImplementedError() + + +__all__ = ("VideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..6210e2e0126e --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
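The wrapped `annotate_video` method above retries `DeadlineExceeded` and `ServiceUnavailable` with exponential backoff (`initial=1.0`, `multiplier=2.5`, capped at `maximum=120.0` seconds; `google.api_core` also applies jitter on top). A sketch of the nominal (pre-jitter) delay schedule implied by those parameters:

```python
def nominal_retry_delays(initial=1.0, maximum=120.0, multiplier=2.5, attempts=8):
    # Nominal sleep intervals implied by the retry policy configured
    # for annotate_video; the real Retry object randomizes each delay.
    delays = []
    delay = initial
    for _ in range(attempts):
        delays.append(min(delay, maximum))
        delay *= multiplier
    return delays


# First few nominal delays: 1.0s, 2.5s, 6.25s, ..., capped at 120s.
```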
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class VideoIntelligenceServiceGrpcTransport(VideoIntelligenceServiceTransport): + """gRPC backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): A list of scopes. This argument is + ignored if ``channel`` is provided. + channel (Optional[grpc.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided.
+ credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+ super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[[video_intelligence.AnnotateVideoRequest], operations.Operation]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..46e6e3f374c4 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/services/video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google.api_core import operations_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import VideoIntelligenceServiceGrpcTransport + + +class VideoIntelligenceServiceGrpcAsyncIOTransport(VideoIntelligenceServiceTransport): + """gRPC AsyncIO backend transport for VideoIntelligenceService. + + Service that implements Google Cloud Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + host (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service.
If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: aio.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id=None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided.
+ credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[aio.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. 
+ + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsAsyncClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[ + [video_intelligence.AnnotateVideoRequest], Awaitable[operations.Operation] + ]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p2beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types.py deleted file mode 100644 index f3dc467941ff..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from __future__ import absolute_import -import sys - -from google.api_core.protobuf_helpers import get_messages - -from google.cloud.videointelligence_v1p2beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - - -_shared_modules = [ - operations_pb2, - any_pb2, - duration_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [ - video_intelligence_pb2, -] - -names = [] - -for module in _shared_modules: # pragma: NO COVER - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.videointelligence_v1p2beta1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - - -__all__ = tuple(sorted(names)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/__init__.py new file mode 100644 index 000000000000..cefd5c141d8f --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/__init__.py @@ -0,0 +1,73 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +from .video_intelligence import ( + AnnotateVideoRequest, + VideoContext, + LabelDetectionConfig, + ShotChangeDetectionConfig, + ExplicitContentDetectionConfig, + TextDetectionConfig, + VideoSegment, + LabelSegment, + LabelFrame, + Entity, + LabelAnnotation, + ExplicitContentFrame, + ExplicitContentAnnotation, + NormalizedBoundingBox, + VideoAnnotationResults, + AnnotateVideoResponse, + VideoAnnotationProgress, + AnnotateVideoProgress, + NormalizedVertex, + NormalizedBoundingPoly, + TextSegment, + TextFrame, + TextAnnotation, + ObjectTrackingFrame, + ObjectTrackingAnnotation, +) + + +__all__ = ( + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/video_intelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/video_intelligence.py new file mode 100644 index 000000000000..2f533ed86796 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p2beta1/types/video_intelligence.py @@ -0,0 +1,687 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import proto # type: ignore + + +from google.protobuf import duration_pb2 as duration # type: ignore +from google.protobuf import timestamp_pb2 as timestamp # type: ignore +from google.rpc import status_pb2 as status # type: ignore + + +__protobuf__ = proto.module( + package="google.cloud.videointelligence.v1p2beta1", + manifest={ + "Feature", + "LabelDetectionMode", + "Likelihood", + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ExplicitContentDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", + }, +) + + +class Feature(proto.Enum): + r"""Video annotation feature.""" + FEATURE_UNSPECIFIED = 0 + LABEL_DETECTION = 1 + SHOT_CHANGE_DETECTION = 2 + EXPLICIT_CONTENT_DETECTION = 3 + TEXT_DETECTION = 7 + OBJECT_TRACKING = 9 + + +class LabelDetectionMode(proto.Enum): + r"""Label detection mode.""" + LABEL_DETECTION_MODE_UNSPECIFIED = 0 + SHOT_MODE = 1 + FRAME_MODE = 2 + SHOT_AND_FRAME_MODE = 3 + + +class Likelihood(proto.Enum): + r"""Bucketized representation of likelihood.""" + LIKELIHOOD_UNSPECIFIED = 0 + VERY_UNLIKELY = 1 + UNLIKELY = 2 + POSSIBLE = 3 + LIKELY = 4 + VERY_LIKELY = 
5 + + +class AnnotateVideoRequest(proto.Message): + r"""Video annotation request. + + Attributes: + input_uri (str): + Input video location. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + A video URI may include wildcards in ``object-id``, and thus + identify multiple videos. Supported wildcards: '*' to match + 0 or more characters; '?' to match 1 character. If unset, + the input video should be embedded in the request as + ``input_content``. If set, ``input_content`` should be + unset. + input_content (bytes): + The video data bytes. If unset, the input video(s) should be + specified via ``input_uri``. If set, ``input_uri`` should be + unset. + features (Sequence[~.video_intelligence.Feature]): + Required. Requested video annotation + features. + video_context (~.video_intelligence.VideoContext): + Additional video context and/or + feature-specific parameters. + output_uri (str): + Optional. Location where the output (in JSON format) should + be stored. Currently, only `Google Cloud + Storage `__ URIs are + supported, which must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + location_id (str): + Optional. Cloud region where annotation should take place. + Supported cloud regions: ``us-east1``, ``us-west1``, + ``europe-west1``, ``asia-east1``. If no region is specified, + a region will be determined based on video file location.
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + input_content = proto.Field(proto.BYTES, number=6) + + features = proto.RepeatedField(proto.ENUM, number=2, enum="Feature",) + + video_context = proto.Field(proto.MESSAGE, number=3, message="VideoContext",) + + output_uri = proto.Field(proto.STRING, number=4) + + location_id = proto.Field(proto.STRING, number=5) + + +class VideoContext(proto.Message): + r"""Video context and/or feature-specific parameters. + + Attributes: + segments (Sequence[~.video_intelligence.VideoSegment]): + Video segments to annotate. The segments may + overlap and are not required to be contiguous or + span the whole video. If unspecified, each video + is treated as a single segment. + label_detection_config (~.video_intelligence.LabelDetectionConfig): + Config for LABEL_DETECTION. + shot_change_detection_config (~.video_intelligence.ShotChangeDetectionConfig): + Config for SHOT_CHANGE_DETECTION. + explicit_content_detection_config (~.video_intelligence.ExplicitContentDetectionConfig): + Config for EXPLICIT_CONTENT_DETECTION. + text_detection_config (~.video_intelligence.TextDetectionConfig): + Config for TEXT_DETECTION. + """ + + segments = proto.RepeatedField(proto.MESSAGE, number=1, message="VideoSegment",) + + label_detection_config = proto.Field( + proto.MESSAGE, number=2, message="LabelDetectionConfig", + ) + + shot_change_detection_config = proto.Field( + proto.MESSAGE, number=3, message="ShotChangeDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, number=4, message="ExplicitContentDetectionConfig", + ) + + text_detection_config = proto.Field( + proto.MESSAGE, number=8, message="TextDetectionConfig", + ) + + +class LabelDetectionConfig(proto.Message): + r"""Config for LABEL_DETECTION. + + Attributes: + label_detection_mode (~.video_intelligence.LabelDetectionMode): + What labels should be detected with LABEL_DETECTION, in + addition to video-level labels or segment-level labels. 
If + unspecified, defaults to ``SHOT_MODE``. + stationary_camera (bool): + Whether the video has been shot from a stationary (i.e. + non-moving) camera. When set to true, might improve + detection accuracy for moving objects. Should be used with + ``SHOT_AND_FRAME_MODE`` enabled. + model (str): + Model to use for label detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + label_detection_mode = proto.Field(proto.ENUM, number=1, enum="LabelDetectionMode",) + + stationary_camera = proto.Field(proto.BOOL, number=2) + + model = proto.Field(proto.STRING, number=3) + + +class ShotChangeDetectionConfig(proto.Message): + r"""Config for SHOT_CHANGE_DETECTION. + + Attributes: + model (str): + Model to use for shot change detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class ExplicitContentDetectionConfig(proto.Message): + r"""Config for EXPLICIT_CONTENT_DETECTION. + + Attributes: + model (str): + Model to use for explicit content detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class TextDetectionConfig(proto.Message): + r"""Config for TEXT_DETECTION. + + Attributes: + language_hints (Sequence[str]): + Language hint can be specified if the + language to be detected is known a priori. It + can increase the accuracy of the detection. + Language hint must be language code in BCP-47 + format. + + Automatic language detection is performed if no + hint is provided. + """ + + language_hints = proto.RepeatedField(proto.STRING, number=1) + + +class VideoSegment(proto.Message): + r"""Video segment. + + Attributes: + start_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the start of the segment + (inclusive). 
+ end_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the end of the segment + (inclusive). + """ + + start_time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class LabelSegment(proto.Message): + r"""Video segment level annotation results for label detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a label was detected. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class LabelFrame(proto.Message): + r"""Video frame level annotation results for label detection. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class Entity(proto.Message): + r"""Detected entity from video analysis. + + Attributes: + entity_id (str): + Opaque entity ID. Some IDs may be available in `Google + Knowledge Graph Search + API `__. + description (str): + Textual description, e.g. ``Fixed-gear bicycle``. + language_code (str): + Language code for ``description`` in BCP-47 format. + """ + + entity_id = proto.Field(proto.STRING, number=1) + + description = proto.Field(proto.STRING, number=2) + + language_code = proto.Field(proto.STRING, number=3) + + +class LabelAnnotation(proto.Message): + r"""Label annotation. + + Attributes: + entity (~.video_intelligence.Entity): + Detected entity. 
+ category_entities (Sequence[~.video_intelligence.Entity]): + Common categories for the detected entity. E.g. when the + label is ``Terrier`` the category is likely ``dog``. And in + some cases there might be more than one category, e.g. + ``Terrier`` could also be a ``pet``. + segments (Sequence[~.video_intelligence.LabelSegment]): + All video segments where a label was + detected. + frames (Sequence[~.video_intelligence.LabelFrame]): + All video frames where a label was detected. + """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + category_entities = proto.RepeatedField(proto.MESSAGE, number=2, message="Entity",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="LabelSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=4, message="LabelFrame",) + + +class ExplicitContentFrame(proto.Message): + r"""Video frame level annotation results for explicit content. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + pornography_likelihood (~.video_intelligence.Likelihood): + Likelihood of the pornography content. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + pornography_likelihood = proto.Field(proto.ENUM, number=2, enum="Likelihood",) + + +class ExplicitContentAnnotation(proto.Message): + r"""Explicit content annotation (based on per-frame visual + signals only). If no explicit content has been detected in a + frame, no annotations are present for that frame. + + Attributes: + frames (Sequence[~.video_intelligence.ExplicitContentFrame]): + All video frames where explicit content was + detected. + """ + + frames = proto.RepeatedField( + proto.MESSAGE, number=1, message="ExplicitContentFrame", + ) + + +class NormalizedBoundingBox(proto.Message): + r"""Normalized bounding box. The normalized vertex coordinates are + relative to the original image.
Range: [0, 1]. + + Attributes: + left (float): + Left X coordinate. + top (float): + Top Y coordinate. + right (float): + Right X coordinate. + bottom (float): + Bottom Y coordinate. + """ + + left = proto.Field(proto.FLOAT, number=1) + + top = proto.Field(proto.FLOAT, number=2) + + right = proto.Field(proto.FLOAT, number=3) + + bottom = proto.Field(proto.FLOAT, number=4) + + +class VideoAnnotationResults(proto.Message): + r"""Annotation results for a single video. + + Attributes: + input_uri (str): + Video file location in `Google Cloud + Storage `__. + segment_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on video level or user + specified segment level. There is exactly one + element for each unique label. + shot_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on shot level. + There is exactly one element for each unique + label. + frame_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on frame level. + There is exactly one element for each unique + label. + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotations. Each shot is represented as + a video segment. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation. + text_annotations (Sequence[~.video_intelligence.TextAnnotation]): + OCR text detection and tracking. + Annotations for list of detected text snippets. + Each will have list of frame information + associated with it. + object_annotations (Sequence[~.video_intelligence.ObjectTrackingAnnotation]): + Annotations for list of objects detected and + tracked in video. + error (~.status.Status): + If set, indicates an error. Note that for a single + ``AnnotateVideoRequest`` some videos may succeed and some + may fail. 
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + segment_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + shot_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=3, message="LabelAnnotation", + ) + + frame_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="LabelAnnotation", + ) + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=6, message="VideoSegment", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=7, message="ExplicitContentAnnotation", + ) + + text_annotations = proto.RepeatedField( + proto.MESSAGE, number=12, message="TextAnnotation", + ) + + object_annotations = proto.RepeatedField( + proto.MESSAGE, number=14, message="ObjectTrackingAnnotation", + ) + + error = proto.Field(proto.MESSAGE, number=9, message=status.Status,) + + +class AnnotateVideoResponse(proto.Message): + r"""Video annotation response. Included in the ``response`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_results (Sequence[~.video_intelligence.VideoAnnotationResults]): + Annotation results for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_results = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationResults", + ) + + +class VideoAnnotationProgress(proto.Message): + r"""Annotation progress for a single video. + + Attributes: + input_uri (str): + Video file location in `Google Cloud + Storage `__. + progress_percent (int): + Approximate percentage processed thus far. + Guaranteed to be 100 when fully processed. + start_time (~.timestamp.Timestamp): + Time when the request was received. + update_time (~.timestamp.Timestamp): + Time of the most recent update. 
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + progress_percent = proto.Field(proto.INT32, number=2) + + start_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,) + + update_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,) + + +class AnnotateVideoProgress(proto.Message): + r"""Video annotation progress. Included in the ``metadata`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_progress (Sequence[~.video_intelligence.VideoAnnotationProgress]): + Progress metadata for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_progress = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationProgress", + ) + + +class NormalizedVertex(proto.Message): + r"""A vertex represents a 2D point in the image. + NOTE: the normalized vertex coordinates are relative to the + original image and range from 0 to 1. + + Attributes: + x (float): + X coordinate. + y (float): + Y coordinate. + """ + + x = proto.Field(proto.FLOAT, number=1) + + y = proto.Field(proto.FLOAT, number=2) + + +class NormalizedBoundingPoly(proto.Message): + r"""Normalized bounding polygon for text (that might not be aligned with + axis). Contains list of the corner points in clockwise order + starting from top-left corner. For example, for a rectangular + bounding box: When the text is horizontal it might look like: 0----1 + \| \| 3----2 + + When it's clockwise rotated 180 degrees around the top-left corner + it becomes: 2----3 \| \| 1----0 + + and the vertex order will still be (0, 1, 2, 3). Note that values + can be less than 0, or greater than 1 due to trignometric + calculations for location of the box. + + Attributes: + vertices (Sequence[~.video_intelligence.NormalizedVertex]): + Normalized vertices of the bounding polygon. 
+ """ + + vertices = proto.RepeatedField(proto.MESSAGE, number=1, message="NormalizedVertex",) + + +class TextSegment(proto.Message): + r"""Video segment level annotation results for text detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a text snippet was + detected. + confidence (float): + Confidence for the track of detected text. It + is calculated as the highest over all frames + where OCR detected text appears. + frames (Sequence[~.video_intelligence.TextFrame]): + Information related to the frames where OCR + detected text appears. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + frames = proto.RepeatedField(proto.MESSAGE, number=3, message="TextFrame",) + + +class TextFrame(proto.Message): + r"""Video frame level annotation results for text annotation + (OCR). Contains information regarding timestamp and bounding box + locations for the frames containing detected OCR text snippets. + + Attributes: + rotated_bounding_box (~.video_intelligence.NormalizedBoundingPoly): + Bounding polygon of the detected text for + this frame. + time_offset (~.duration.Duration): + Timestamp of this frame. + """ + + rotated_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingPoly", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class TextAnnotation(proto.Message): + r"""Annotations related to one detected OCR text snippet. This + will contain the corresponding text, confidence value, and frame + level information for each detection. + + Attributes: + text (str): + The detected text. + segments (Sequence[~.video_intelligence.TextSegment]): + All video segments where OCR detected text + appears. 
+ """ + + text = proto.Field(proto.STRING, number=1) + + segments = proto.RepeatedField(proto.MESSAGE, number=2, message="TextSegment",) + + +class ObjectTrackingFrame(proto.Message): + r"""Video frame level annotations for object detection and + tracking. This field stores per frame location, time offset, and + confidence. + + Attributes: + normalized_bounding_box (~.video_intelligence.NormalizedBoundingBox): + The normalized bounding box location of this + object track for the frame. + time_offset (~.duration.Duration): + The timestamp of the frame in microseconds. + """ + + normalized_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class ObjectTrackingAnnotation(proto.Message): + r"""Annotations corresponding to one tracked object. + + Attributes: + entity (~.video_intelligence.Entity): + Entity to specify the object category that + this track is labeled as. + confidence (float): + Object category's labeling confidence of this + track. + frames (Sequence[~.video_intelligence.ObjectTrackingFrame]): + Information corresponding to all frames where + this object track appears. + segment (~.video_intelligence.VideoSegment): + Each object track corresponds to one video + segment where it appears. 
+ """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + confidence = proto.Field(proto.FLOAT, number=4) + + frames = proto.RepeatedField( + proto.MESSAGE, number=2, message="ObjectTrackingFrame", + ) + + segment = proto.Field(proto.MESSAGE, number=3, message="VideoSegment",) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/__init__.py index 2b0d752b5d5c..8ae655cdc0c2 100644 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/__init__.py +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/__init__.py @@ -1,62 +1,145 @@ # -*- coding: utf-8 -*- -# + # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # -# https://www.apache.org/licenses/LICENSE-2.0 +# http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
+# - -from __future__ import absolute_import -import sys -import warnings - -from google.cloud.videointelligence_v1p3beta1 import types -from google.cloud.videointelligence_v1p3beta1.gapic import enums -from google.cloud.videointelligence_v1p3beta1.gapic import ( - streaming_video_intelligence_service_client, -) -from google.cloud.videointelligence_v1p3beta1.gapic import ( - video_intelligence_service_client, +from .services.streaming_video_intelligence_service import ( + StreamingVideoIntelligenceServiceClient, ) - - -if sys.version_info[:2] == (2, 7): - message = ( - "A future version of this library will drop support for Python 2.7. " - "More details about Python 2 support for Google Cloud Client Libraries " - "can be found at https://cloud.google.com/python/docs/python2-sunset/" - ) - warnings.warn(message, DeprecationWarning) - - -class VideoIntelligenceServiceClient( - video_intelligence_service_client.VideoIntelligenceServiceClient -): - __doc__ = video_intelligence_service_client.VideoIntelligenceServiceClient.__doc__ - enums = enums - - -class StreamingVideoIntelligenceServiceClient( - streaming_video_intelligence_service_client.StreamingVideoIntelligenceServiceClient -): - __doc__ = ( - streaming_video_intelligence_service_client.StreamingVideoIntelligenceServiceClient.__doc__ - ) - enums = enums +from .services.video_intelligence_service import VideoIntelligenceServiceClient +from .types.video_intelligence import AnnotateVideoProgress +from .types.video_intelligence import AnnotateVideoRequest +from .types.video_intelligence import AnnotateVideoResponse +from .types.video_intelligence import Celebrity +from .types.video_intelligence import CelebrityRecognitionAnnotation +from .types.video_intelligence import CelebrityTrack +from .types.video_intelligence import DetectedAttribute +from .types.video_intelligence import DetectedLandmark +from .types.video_intelligence import Entity +from .types.video_intelligence import ExplicitContentAnnotation +from 
.types.video_intelligence import ExplicitContentDetectionConfig +from .types.video_intelligence import ExplicitContentFrame +from .types.video_intelligence import FaceDetectionAnnotation +from .types.video_intelligence import FaceDetectionConfig +from .types.video_intelligence import Feature +from .types.video_intelligence import LabelAnnotation +from .types.video_intelligence import LabelDetectionConfig +from .types.video_intelligence import LabelDetectionMode +from .types.video_intelligence import LabelFrame +from .types.video_intelligence import LabelSegment +from .types.video_intelligence import Likelihood +from .types.video_intelligence import LogoRecognitionAnnotation +from .types.video_intelligence import NormalizedBoundingBox +from .types.video_intelligence import NormalizedBoundingPoly +from .types.video_intelligence import NormalizedVertex +from .types.video_intelligence import ObjectTrackingAnnotation +from .types.video_intelligence import ObjectTrackingConfig +from .types.video_intelligence import ObjectTrackingFrame +from .types.video_intelligence import PersonDetectionAnnotation +from .types.video_intelligence import PersonDetectionConfig +from .types.video_intelligence import ShotChangeDetectionConfig +from .types.video_intelligence import SpeechContext +from .types.video_intelligence import SpeechRecognitionAlternative +from .types.video_intelligence import SpeechTranscription +from .types.video_intelligence import SpeechTranscriptionConfig +from .types.video_intelligence import StreamingAnnotateVideoRequest +from .types.video_intelligence import StreamingAnnotateVideoResponse +from .types.video_intelligence import StreamingAutomlActionRecognitionConfig +from .types.video_intelligence import StreamingAutomlClassificationConfig +from .types.video_intelligence import StreamingAutomlObjectTrackingConfig +from .types.video_intelligence import StreamingExplicitContentDetectionConfig +from .types.video_intelligence import StreamingFeature +from 
.types.video_intelligence import StreamingLabelDetectionConfig +from .types.video_intelligence import StreamingObjectTrackingConfig +from .types.video_intelligence import StreamingShotChangeDetectionConfig +from .types.video_intelligence import StreamingStorageConfig +from .types.video_intelligence import StreamingVideoAnnotationResults +from .types.video_intelligence import StreamingVideoConfig +from .types.video_intelligence import TextAnnotation +from .types.video_intelligence import TextDetectionConfig +from .types.video_intelligence import TextFrame +from .types.video_intelligence import TextSegment +from .types.video_intelligence import TimestampedObject +from .types.video_intelligence import Track +from .types.video_intelligence import VideoAnnotationProgress +from .types.video_intelligence import VideoAnnotationResults +from .types.video_intelligence import VideoContext +from .types.video_intelligence import VideoSegment +from .types.video_intelligence import WordInfo __all__ = ( - "enums", - "types", - "VideoIntelligenceServiceClient", + "AnnotateVideoProgress", + "AnnotateVideoRequest", + "AnnotateVideoResponse", + "Celebrity", + "CelebrityRecognitionAnnotation", + "CelebrityTrack", + "DetectedAttribute", + "DetectedLandmark", + "Entity", + "ExplicitContentAnnotation", + "ExplicitContentDetectionConfig", + "ExplicitContentFrame", + "FaceDetectionAnnotation", + "FaceDetectionConfig", + "Feature", + "LabelAnnotation", + "LabelDetectionConfig", + "LabelDetectionMode", + "LabelFrame", + "LabelSegment", + "Likelihood", + "LogoRecognitionAnnotation", + "NormalizedBoundingBox", + "NormalizedBoundingPoly", + "NormalizedVertex", + "ObjectTrackingAnnotation", + "ObjectTrackingConfig", + "ObjectTrackingFrame", + "PersonDetectionAnnotation", + "PersonDetectionConfig", + "ShotChangeDetectionConfig", + "SpeechContext", + "SpeechRecognitionAlternative", + "SpeechTranscription", + "SpeechTranscriptionConfig", + "StreamingAnnotateVideoRequest", + 
"StreamingAnnotateVideoResponse", + "StreamingAutomlActionRecognitionConfig", + "StreamingAutomlClassificationConfig", + "StreamingAutomlObjectTrackingConfig", + "StreamingExplicitContentDetectionConfig", + "StreamingFeature", + "StreamingLabelDetectionConfig", + "StreamingObjectTrackingConfig", + "StreamingShotChangeDetectionConfig", + "StreamingStorageConfig", + "StreamingVideoAnnotationResults", + "StreamingVideoConfig", "StreamingVideoIntelligenceServiceClient", + "TextAnnotation", + "TextDetectionConfig", + "TextFrame", + "TextSegment", + "TimestampedObject", + "Track", + "VideoAnnotationProgress", + "VideoAnnotationResults", + "VideoContext", + "VideoSegment", + "WordInfo", + "VideoIntelligenceServiceClient", ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/enums.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/enums.py deleted file mode 100644 index b4ba7b25c597..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/enums.py +++ /dev/null @@ -1,113 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -"""Wrappers for protocol buffer enum types.""" - -import enum - - -class Feature(enum.IntEnum): - """ - Video annotation feature. - - Attributes: - FEATURE_UNSPECIFIED (int): Unspecified. - LABEL_DETECTION (int): Label detection. Detect objects, such as dog or flower. - SHOT_CHANGE_DETECTION (int): Shot change detection. - EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - FACE_DETECTION (int): Human face detection. - SPEECH_TRANSCRIPTION (int): Speech transcription. - TEXT_DETECTION (int): OCR text detection and tracking. - OBJECT_TRACKING (int): Object detection and tracking. - LOGO_RECOGNITION (int): Logo detection, tracking, and recognition. - CELEBRITY_RECOGNITION (int): Celebrity recognition. - PERSON_DETECTION (int): Person detection. - """ - - FEATURE_UNSPECIFIED = 0 - LABEL_DETECTION = 1 - SHOT_CHANGE_DETECTION = 2 - EXPLICIT_CONTENT_DETECTION = 3 - FACE_DETECTION = 4 - SPEECH_TRANSCRIPTION = 6 - TEXT_DETECTION = 7 - OBJECT_TRACKING = 9 - LOGO_RECOGNITION = 12 - CELEBRITY_RECOGNITION = 13 - PERSON_DETECTION = 14 - - -class LabelDetectionMode(enum.IntEnum): - """ - Label detection mode. - - Attributes: - LABEL_DETECTION_MODE_UNSPECIFIED (int): Unspecified. - SHOT_MODE (int): Detect shot-level labels. - FRAME_MODE (int): Detect frame-level labels. - SHOT_AND_FRAME_MODE (int): Detect both shot-level and frame-level labels. - """ - - LABEL_DETECTION_MODE_UNSPECIFIED = 0 - SHOT_MODE = 1 - FRAME_MODE = 2 - SHOT_AND_FRAME_MODE = 3 - - -class Likelihood(enum.IntEnum): - """ - Bucketized representation of likelihood. - - Attributes: - LIKELIHOOD_UNSPECIFIED (int): Unspecified likelihood. - VERY_UNLIKELY (int): Very unlikely. - UNLIKELY (int): Unlikely. - POSSIBLE (int): Possible. - LIKELY (int): Likely. - VERY_LIKELY (int): Very likely. 
- """ - - LIKELIHOOD_UNSPECIFIED = 0 - VERY_UNLIKELY = 1 - UNLIKELY = 2 - POSSIBLE = 3 - LIKELY = 4 - VERY_LIKELY = 5 - - -class StreamingFeature(enum.IntEnum): - """ - Streaming video annotation feature. - - Attributes: - STREAMING_FEATURE_UNSPECIFIED (int): Unspecified. - STREAMING_LABEL_DETECTION (int): Label detection. Detect objects, such as dog or flower. - STREAMING_SHOT_CHANGE_DETECTION (int): Shot change detection. - STREAMING_EXPLICIT_CONTENT_DETECTION (int): Explicit content detection. - STREAMING_OBJECT_TRACKING (int): Object detection and tracking. - STREAMING_AUTOML_ACTION_RECOGNITION (int): Action recognition based on AutoML model. - STREAMING_AUTOML_CLASSIFICATION (int): Video classification based on AutoML model. - STREAMING_AUTOML_OBJECT_TRACKING (int): Object detection and tracking based on AutoML model. - """ - - STREAMING_FEATURE_UNSPECIFIED = 0 - STREAMING_LABEL_DETECTION = 1 - STREAMING_SHOT_CHANGE_DETECTION = 2 - STREAMING_EXPLICIT_CONTENT_DETECTION = 3 - STREAMING_OBJECT_TRACKING = 4 - STREAMING_AUTOML_ACTION_RECOGNITION = 23 - STREAMING_AUTOML_CLASSIFICATION = 21 - STREAMING_AUTOML_OBJECT_TRACKING = 22 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client.py deleted file mode 100644 index 2a46f4dbf8db..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client.py +++ /dev/null @@ -1,252 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Accesses the google.cloud.videointelligence.v1p3beta1 StreamingVideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.protobuf_helpers -import grpc - -from google.cloud.videointelligence_v1p3beta1.gapic import enums -from google.cloud.videointelligence_v1p3beta1.gapic import ( - streaming_video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1p3beta1.gapic.transports import ( - streaming_video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class StreamingVideoIntelligenceServiceClient(object): - """Service that implements streaming Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. 
- _INTERFACE_NAME = ( - "google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService" - ) - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - StreamingVideoIntelligenceServiceClient: The constructed client. - """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.StreamingVideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.StreamingVideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. 
- This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = streaming_video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=streaming_video_intelligence_service_grpc_transport.StreamingVideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." 
- ) - self.transport = transport - else: - self.transport = streaming_video_intelligence_service_grpc_transport.StreamingVideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) - self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def streaming_annotate_video( - self, - requests, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs video annotation with bidirectional streaming: emitting results - while sending video/audio bytes. - This method is only available via the gRPC API (not REST). - - Example: - >>> from google.cloud import videointelligence_v1p3beta1 - >>> - >>> client = videointelligence_v1p3beta1.StreamingVideoIntelligenceServiceClient() - >>> - >>> request = {} - >>> - >>> requests = [request] - >>> for element in client.streaming_annotate_video(requests): - ... # process element - ... pass - - Args: - requests (iterator[dict|google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2.StreamingAnnotateVideoRequest]): The input objects. 
If a dict is provided, it must be of the - same form as the protobuf message :class:`~google.cloud.videointelligence_v1p3beta1.types.StreamingAnnotateVideoRequest` - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - Iterable[~google.cloud.videointelligence_v1p3beta1.types.StreamingAnnotateVideoResponse]. - - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. 
- if "streaming_annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "streaming_annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.streaming_annotate_video, - default_retry=self._method_configs["StreamingAnnotateVideo"].retry, - default_timeout=self._method_configs["StreamingAnnotateVideo"].timeout, - client_info=self._client_info, - ) - - return self._inner_api_calls["streaming_annotate_video"]( - requests, retry=retry, timeout=timeout, metadata=metadata - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client_config.py deleted file mode 100644 index 0603442496b6..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/streaming_video_intelligence_service_client_config.py +++ /dev/null @@ -1,47 +0,0 @@ -config = { - "interfaces": { - "google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService": { - "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - "retry_policy_2_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - }, - "retry_params": { - "retry_policy_1_params": { - "initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "retry_policy_2_params": { - "initial_retry_delay_millis": 100, - "retry_delay_multiplier": 1.3, - "max_retry_delay_millis": 60000, - "initial_rpc_timeout_millis": 10800000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 10800000, - "total_timeout_millis": 10800000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - 
"retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "StreamingAnnotateVideo": { - "timeout_millis": 10800000, - "retry_codes_name": "retry_policy_2_codes", - "retry_params_name": "retry_policy_2_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/streaming_video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/streaming_video_intelligence_service_grpc_transport.py deleted file mode 100644 index 124c9a2154e7..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/streaming_video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,130 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -import google.api_core.grpc_helpers - -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2_grpc - - -class StreamingVideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1p3beta1 StreamingVideoIntelligenceService API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. - _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. 
- self._stubs = { - "streaming_video_intelligence_service_stub": video_intelligence_pb2_grpc.StreamingVideoIntelligenceServiceStub( - channel - ), - } - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. - - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def streaming_annotate_video(self): - """Return the gRPC stub for :meth:`StreamingVideoIntelligenceServiceClient.streaming_annotate_video`. - - Performs video annotation with bidirectional streaming: emitting results - while sending video/audio bytes. - This method is only available via the gRPC API (not REST). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs[ - "streaming_video_intelligence_service_stub" - ].StreamingAnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/video_intelligence_service_grpc_transport.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/video_intelligence_service_grpc_transport.py deleted file mode 100644 index f23a6d3ccdc5..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/transports/video_intelligence_service_grpc_transport.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import google.api_core.grpc_helpers -import google.api_core.operations_v1 - -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2_grpc - - -class VideoIntelligenceServiceGrpcTransport(object): - """gRPC transport class providing stubs for - google.cloud.videointelligence.v1p3beta1 VideoIntelligenceService API. - - The transport provides access to the raw gRPC stubs, - which can be used to take advantage of advanced - features of gRPC. - """ - - # The scopes needed to make gRPC calls to all of the methods defined - # in this service. 
- _OAUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) - - def __init__( - self, - channel=None, - credentials=None, - address="videointelligence.googleapis.com:443", - ): - """Instantiate the transport class. - - Args: - channel (grpc.Channel): A ``Channel`` instance through - which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - address (str): The address where the service is hosted. - """ - # If both `channel` and `credentials` are specified, raise an - # exception (channels come with credentials baked in already). - if channel is not None and credentials is not None: - raise ValueError( - "The `channel` and `credentials` arguments are mutually " "exclusive.", - ) - - # Create the channel. - if channel is None: - channel = self.create_channel( - address=address, - credentials=credentials, - options={ - "grpc.max_send_message_length": -1, - "grpc.max_receive_message_length": -1, - }.items(), - ) - - self._channel = channel - - # gRPC uses objects called "stubs" that are bound to the - # channel and provide a basic method for each RPC. - self._stubs = { - "video_intelligence_service_stub": video_intelligence_pb2_grpc.VideoIntelligenceServiceStub( - channel - ), - } - - # Because this API includes a method that returns a - # long-running operation (proto: google.longrunning.Operation), - # instantiate an LRO client. - self._operations_client = google.api_core.operations_v1.OperationsClient( - channel - ) - - @classmethod - def create_channel( - cls, address="videointelligence.googleapis.com:443", credentials=None, **kwargs - ): - """Create and return a gRPC channel object. 
- - Args: - address (str): The host for the channel to use. - credentials (~.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If - none are specified, the client will attempt to ascertain - the credentials from the environment. - kwargs (dict): Keyword arguments, which are passed to the - channel creation. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return google.api_core.grpc_helpers.create_channel( - address, credentials=credentials, scopes=cls._OAUTH_SCOPES, **kwargs - ) - - @property - def channel(self): - """The gRPC channel used by the transport. - - Returns: - grpc.Channel: A gRPC channel object. - """ - return self._channel - - @property - def annotate_video(self): - """Return the gRPC stub for :meth:`VideoIntelligenceServiceClient.annotate_video`. - - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Returns: - Callable: A callable which accepts the appropriate - deserialized request object and returns a - deserialized response object. 
- """ - return self._stubs["video_intelligence_service_stub"].AnnotateVideo diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client.py deleted file mode 100644 index ad989e967668..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client.py +++ /dev/null @@ -1,309 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Accesses the google.cloud.videointelligence.v1p3beta1 VideoIntelligenceService API.""" - -import pkg_resources -import warnings - -from google.oauth2 import service_account -import google.api_core.client_options -import google.api_core.gapic_v1.client_info -import google.api_core.gapic_v1.config -import google.api_core.gapic_v1.method -import google.api_core.grpc_helpers -import google.api_core.operation -import google.api_core.operations_v1 -import grpc - -from google.cloud.videointelligence_v1p3beta1.gapic import enums -from google.cloud.videointelligence_v1p3beta1.gapic import ( - video_intelligence_service_client_config, -) -from google.cloud.videointelligence_v1p3beta1.gapic.transports import ( - video_intelligence_service_grpc_transport, -) -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2 -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2_grpc -from google.longrunning import operations_pb2 - - -_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution( - "google-cloud-videointelligence", -).version - - -class VideoIntelligenceServiceClient(object): - """Service that implements the Video Intelligence API.""" - - SERVICE_ADDRESS = "videointelligence.googleapis.com:443" - """The default address of the service.""" - - # The name of the interface for this client. This is the key used to - # find the method configuration in the client_config dictionary. - _INTERFACE_NAME = ( - "google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService" - ) - - @classmethod - def from_service_account_file(cls, filename, *args, **kwargs): - """Creates an instance of this client using the provided credentials - file. - - Args: - filename (str): The path to the service account private key json - file. - args: Additional arguments to pass to the constructor. - kwargs: Additional arguments to pass to the constructor. - - Returns: - VideoIntelligenceServiceClient: The constructed client. 
- """ - credentials = service_account.Credentials.from_service_account_file(filename) - kwargs["credentials"] = credentials - return cls(*args, **kwargs) - - from_service_account_json = from_service_account_file - - def __init__( - self, - transport=None, - channel=None, - credentials=None, - client_config=None, - client_info=None, - client_options=None, - ): - """Constructor. - - Args: - transport (Union[~.VideoIntelligenceServiceGrpcTransport, - Callable[[~.Credentials, type], ~.VideoIntelligenceServiceGrpcTransport]): A transport - instance, responsible for actually making the API calls. - The default transport uses the gRPC protocol. - This argument may also be a callable which returns a - transport instance. Callables will be sent the credentials - as the first argument and the default transport class as - the second argument. - channel (grpc.Channel): DEPRECATED. A ``Channel`` instance - through which to make calls. This argument is mutually exclusive - with ``credentials``; providing both will raise an exception. - credentials (google.auth.credentials.Credentials): The - authorization credentials to attach to requests. These - credentials identify this application to the service. If none - are specified, the client will attempt to ascertain the - credentials from the environment. - This argument is mutually exclusive with providing a - transport instance to ``transport``; doing so will raise - an exception. - client_config (dict): DEPRECATED. A dictionary of call options for - each method. If not specified, the default configuration is used. - client_info (google.api_core.gapic_v1.client_info.ClientInfo): - The client info used to send a user-agent string along with - API requests. If ``None``, then default info will be used. - Generally, you only need to set this if you're developing - your own client library. - client_options (Union[dict, google.api_core.client_options.ClientOptions]): - Client options used to set user options on the client. 
API Endpoint - should be set through client_options. - """ - # Raise deprecation warnings for things we want to go away. - if client_config is not None: - warnings.warn( - "The `client_config` argument is deprecated.", - PendingDeprecationWarning, - stacklevel=2, - ) - else: - client_config = video_intelligence_service_client_config.config - - if channel: - warnings.warn( - "The `channel` argument is deprecated; use " "`transport` instead.", - PendingDeprecationWarning, - stacklevel=2, - ) - - api_endpoint = self.SERVICE_ADDRESS - if client_options: - if type(client_options) == dict: - client_options = google.api_core.client_options.from_dict( - client_options - ) - if client_options.api_endpoint: - api_endpoint = client_options.api_endpoint - - # Instantiate the transport. - # The transport is responsible for handling serialization and - # deserialization and actually sending data to the service. - if transport: - if callable(transport): - self.transport = transport( - credentials=credentials, - default_class=video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport, - address=api_endpoint, - ) - else: - if credentials: - raise ValueError( - "Received both a transport instance and " - "credentials; these are mutually exclusive." - ) - self.transport = transport - else: - self.transport = video_intelligence_service_grpc_transport.VideoIntelligenceServiceGrpcTransport( - address=api_endpoint, channel=channel, credentials=credentials, - ) - - if client_info is None: - client_info = google.api_core.gapic_v1.client_info.ClientInfo( - gapic_version=_GAPIC_LIBRARY_VERSION, - ) - else: - client_info.gapic_version = _GAPIC_LIBRARY_VERSION - self._client_info = client_info - - # Parse out the default settings for retry and timeout for each RPC - # from the client configuration. - # (Ordinarily, these are the defaults specified in the `*_config.py` - # file next to this one.) 
- self._method_configs = google.api_core.gapic_v1.config.parse_method_configs( - client_config["interfaces"][self._INTERFACE_NAME], - ) - - # Save a dictionary of cached API call functions. - # These are the actual callables which invoke the proper - # transport methods, wrapped with `wrap_method` to add retry, - # timeout, and the like. - self._inner_api_calls = {} - - # Service calls - def annotate_video( - self, - features, - input_uri=None, - input_content=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None, - ): - """ - Performs asynchronous video annotation. Progress and results can be - retrieved through the ``google.longrunning.Operations`` interface. - ``Operation.metadata`` contains ``AnnotateVideoProgress`` (progress). - ``Operation.response`` contains ``AnnotateVideoResponse`` (results). - - Example: - >>> from google.cloud import videointelligence_v1p3beta1 - >>> from google.cloud.videointelligence_v1p3beta1 import enums - >>> - >>> client = videointelligence_v1p3beta1.VideoIntelligenceServiceClient() - >>> - >>> features_element = enums.Feature.LABEL_DETECTION - >>> features = [features_element] - >>> input_uri = 'gs://cloud-samples-data/video/cat.mp4' - >>> - >>> response = client.annotate_video(features, input_uri=input_uri) - >>> - >>> def callback(operation_future): - ... # Handle result. - ... result = operation_future.result() - >>> - >>> response.add_done_callback(callback) - >>> - >>> # Handle metadata. - >>> metadata = response.metadata() - - Args: - features (list[~google.cloud.videointelligence_v1p3beta1.types.Feature]): Required. Requested video annotation features. - input_uri (str): Input video location. Currently, only `Cloud - Storage `__ URIs are supported. URIs - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). 
For - more information, see `Request - URIs `__. To - identify multiple videos, a video URI may include wildcards in the - ``object-id``. Supported wildcards: '*' to match 0 or more characters; - '?' to match 1 character. If unset, the input video should be embedded - in the request as ``input_content``. If set, ``input_content`` must be - unset. - input_content (bytes): The video data bytes. If unset, the input video(s) should be - specified via the ``input_uri``. If set, ``input_uri`` must be unset. - video_context (Union[dict, ~google.cloud.videointelligence_v1p3beta1.types.VideoContext]): Additional video context and/or feature-specific parameters. - - If a dict is provided, it must be of the same form as the protobuf - message :class:`~google.cloud.videointelligence_v1p3beta1.types.VideoContext` - output_uri (str): Optional. Location where the output (in JSON format) should be - stored. Currently, only `Cloud - Storage `__ URIs are supported. These - must be specified in the following format: ``gs://bucket-id/object-id`` - (other URI formats return ``google.rpc.Code.INVALID_ARGUMENT``). For - more information, see `Request - URIs `__. - location_id (str): Optional. Cloud region where annotation should take place. Supported - cloud regions are: ``us-east1``, ``us-west1``, ``europe-west1``, - ``asia-east1``. If no region is specified, the region will be determined - based on video file location. - retry (Optional[google.api_core.retry.Retry]): A retry object used - to retry requests. If ``None`` is specified, requests will - be retried using a default configuration. - timeout (Optional[float]): The amount of time, in seconds, to wait - for the request to complete. Note that if ``retry`` is - specified, the timeout applies to each individual attempt. - metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata - that is provided to the method. - - Returns: - A :class:`~google.cloud.videointelligence_v1p3beta1.types._OperationFuture` instance. 
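The `input_uri` documentation above allows `*` (zero or more characters) and `?` (exactly one character) wildcards in the `object-id`. These are ordinary glob semantics, so the matching behavior can be sketched with the standard library; this is illustrative only, not how the service itself implements matching:

```python
import fnmatch


def uri_matches(uri, pattern):
    """Return True if a gs:// URI matches a pattern using '*' and '?' wildcards."""
    # fnmatchcase gives case-sensitive glob matching: '*' matches any run of
    # characters (including '/'), '?' matches exactly one character.
    return fnmatch.fnmatchcase(uri, pattern)


uri_matches("gs://bucket-id/video1.mp4", "gs://bucket-id/video?.mp4")  # True
uri_matches("gs://bucket-id/clips/cat.mp4", "gs://bucket-id/*.mp4")    # True: '*' also crosses '/'
```

Note that `'?'` matches exactly one character, so `video12.mp4` would not match `video?.mp4`.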
- - Raises: - google.api_core.exceptions.GoogleAPICallError: If the request - failed for any reason. - google.api_core.exceptions.RetryError: If the request failed due - to a retryable error and retry attempts failed. - ValueError: If the parameters are invalid. - """ - # Wrap the transport method to add retry and timeout logic. - if "annotate_video" not in self._inner_api_calls: - self._inner_api_calls[ - "annotate_video" - ] = google.api_core.gapic_v1.method.wrap_method( - self.transport.annotate_video, - default_retry=self._method_configs["AnnotateVideo"].retry, - default_timeout=self._method_configs["AnnotateVideo"].timeout, - client_info=self._client_info, - ) - - request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, - input_uri=input_uri, - input_content=input_content, - video_context=video_context, - output_uri=output_uri, - location_id=location_id, - ) - operation = self._inner_api_calls["annotate_video"]( - request, retry=retry, timeout=timeout, metadata=metadata - ) - return google.api_core.operation.from_gapic( - operation, - self.transport._operations_client, - video_intelligence_pb2.AnnotateVideoResponse, - metadata_type=video_intelligence_pb2.AnnotateVideoProgress, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client_config.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client_config.py deleted file mode 100644 index beb8770d42ee..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/gapic/video_intelligence_service_client_config.py +++ /dev/null @@ -1,37 +0,0 @@ -config = { - "interfaces": { - "google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService": { - "retry_codes": { - "retry_policy_1_codes": ["UNAVAILABLE", "DEADLINE_EXCEEDED"], - "no_retry_codes": [], - }, - "retry_params": { - "retry_policy_1_params": { - 
"initial_retry_delay_millis": 1000, - "retry_delay_multiplier": 2.5, - "max_retry_delay_millis": 120000, - "initial_rpc_timeout_millis": 600000, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 600000, - "total_timeout_millis": 600000, - }, - "no_retry_params": { - "initial_retry_delay_millis": 0, - "retry_delay_multiplier": 0.0, - "max_retry_delay_millis": 0, - "initial_rpc_timeout_millis": 0, - "rpc_timeout_multiplier": 1.0, - "max_rpc_timeout_millis": 0, - "total_timeout_millis": 0, - }, - }, - "methods": { - "AnnotateVideo": { - "timeout_millis": 600000, - "retry_codes_name": "retry_policy_1_codes", - "retry_params_name": "retry_policy_1_params", - } - }, - } - } -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/__init__.py deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto deleted file mode 100644 index 6284e0db3847..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto +++ /dev/null @@ -1,1089 +0,0 @@ -// Copyright 2020 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-// See the License for the specific language governing permissions and -// limitations under the License. - -syntax = "proto3"; - -package google.cloud.videointelligence.v1p3beta1; - -import "google/api/annotations.proto"; -import "google/api/client.proto"; -import "google/api/field_behavior.proto"; -import "google/longrunning/operations.proto"; -import "google/protobuf/duration.proto"; -import "google/protobuf/timestamp.proto"; -import "google/rpc/status.proto"; - -option csharp_namespace = "Google.Cloud.VideoIntelligence.V1P3Beta1"; -option go_package = "google.golang.org/genproto/googleapis/cloud/videointelligence/v1p3beta1;videointelligence"; -option java_multiple_files = true; -option java_outer_classname = "VideoIntelligenceServiceProto"; -option java_package = "com.google.cloud.videointelligence.v1p3beta1"; -option php_namespace = "Google\\Cloud\\VideoIntelligence\\V1p3beta1"; - -// Service that implements the Video Intelligence API. -service VideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform"; - - // Performs asynchronous video annotation. Progress and results can be - // retrieved through the `google.longrunning.Operations` interface. - // `Operation.metadata` contains `AnnotateVideoProgress` (progress). - // `Operation.response` contains `AnnotateVideoResponse` (results). - rpc AnnotateVideo(AnnotateVideoRequest) - returns (google.longrunning.Operation) { - option (google.api.http) = { - post: "/v1p3beta1/videos:annotate" - body: "*" - }; - option (google.api.method_signature) = "input_uri,features"; - option (google.longrunning.operation_info) = { - response_type: "AnnotateVideoResponse" - metadata_type: "AnnotateVideoProgress" - }; - } -} - -// Service that implements streaming Video Intelligence API. 
-service StreamingVideoIntelligenceService { - option (google.api.default_host) = "videointelligence.googleapis.com"; - option (google.api.oauth_scopes) = - "https://www.googleapis.com/auth/cloud-platform"; - - // Performs video annotation with bidirectional streaming: emitting results - // while sending video/audio bytes. - // This method is only available via the gRPC API (not REST). - rpc StreamingAnnotateVideo(stream StreamingAnnotateVideoRequest) - returns (stream StreamingAnnotateVideoResponse) {} -} - -// Video annotation request. -message AnnotateVideoRequest { - // Input video location. Currently, only - // [Cloud Storage](https://cloud.google.com/storage/) URIs are - // supported. URIs must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For - // more information, see [Request - // URIs](https://cloud.google.com/storage/docs/request-endpoints). To identify - // multiple videos, a video URI may include wildcards in the `object-id`. - // Supported wildcards: '*' to match 0 or more characters; - // '?' to match 1 character. If unset, the input video should be embedded - // in the request as `input_content`. If set, `input_content` must be unset. - string input_uri = 1; - - // The video data bytes. - // If unset, the input video(s) should be specified via the `input_uri`. - // If set, `input_uri` must be unset. - bytes input_content = 6; - - // Required. Requested video annotation features. - repeated Feature features = 2 [(google.api.field_behavior) = REQUIRED]; - - // Additional video context and/or feature-specific parameters. - VideoContext video_context = 3; - - // Optional. Location where the output (in JSON format) should be stored. - // Currently, only [Cloud Storage](https://cloud.google.com/storage/) - // URIs are supported. 
These must be specified in the following format: - // `gs://bucket-id/object-id` (other URI formats return - // [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For - // more information, see [Request - // URIs](https://cloud.google.com/storage/docs/request-endpoints). - string output_uri = 4 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. Cloud region where annotation should take place. Supported cloud - // regions are: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no - // region is specified, the region will be determined based on video file - // location. - string location_id = 5 [(google.api.field_behavior) = OPTIONAL]; -} - -// Video context and/or feature-specific parameters. -message VideoContext { - // Video segments to annotate. The segments may overlap and are not required - // to be contiguous or span the whole video. If unspecified, each video is - // treated as a single segment. - repeated VideoSegment segments = 1; - - // Config for LABEL_DETECTION. - LabelDetectionConfig label_detection_config = 2; - - // Config for SHOT_CHANGE_DETECTION. - ShotChangeDetectionConfig shot_change_detection_config = 3; - - // Config for EXPLICIT_CONTENT_DETECTION. - ExplicitContentDetectionConfig explicit_content_detection_config = 4; - - // Config for FACE_DETECTION. - FaceDetectionConfig face_detection_config = 5; - - // Config for SPEECH_TRANSCRIPTION. - SpeechTranscriptionConfig speech_transcription_config = 6; - - // Config for TEXT_DETECTION. - TextDetectionConfig text_detection_config = 8; - - // Config for PERSON_DETECTION. - PersonDetectionConfig person_detection_config = 11; - - // Config for OBJECT_TRACKING. - ObjectTrackingConfig object_tracking_config = 13; -} - -// Label detection mode. -enum LabelDetectionMode { - // Unspecified. - LABEL_DETECTION_MODE_UNSPECIFIED = 0; - - // Detect shot-level labels. - SHOT_MODE = 1; - - // Detect frame-level labels. 
- FRAME_MODE = 2; - - // Detect both shot-level and frame-level labels. - SHOT_AND_FRAME_MODE = 3; -} - -// Bucketized representation of likelihood. -enum Likelihood { - // Unspecified likelihood. - LIKELIHOOD_UNSPECIFIED = 0; - - // Very unlikely. - VERY_UNLIKELY = 1; - - // Unlikely. - UNLIKELY = 2; - - // Possible. - POSSIBLE = 3; - - // Likely. - LIKELY = 4; - - // Very likely. - VERY_LIKELY = 5; -} - -// Config for LABEL_DETECTION. -message LabelDetectionConfig { - // What labels should be detected with LABEL_DETECTION, in addition to - // video-level labels or segment-level labels. - // If unspecified, defaults to `SHOT_MODE`. - LabelDetectionMode label_detection_mode = 1; - - // Whether the video has been shot from a stationary (i.e., non-moving) - // camera. When set to true, might improve detection accuracy for moving - // objects. Should be used with `SHOT_AND_FRAME_MODE` enabled. - bool stationary_camera = 2; - - // Model to use for label detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 3; - - // The confidence threshold used to filter labels from frame-level - // detection. If not set, it is set to 0.4 by default. The valid - // range for this threshold is [0.1, 0.9]. Any value set outside of this - // range will be clipped. - // Note: For best results, follow the default threshold. We will update - // the default threshold every time we release a new model. - float frame_confidence_threshold = 4; - - // The confidence threshold used to filter labels from video-level and - // shot-level detections. If not set, it's set to 0.3 by - // default. The valid range for this threshold is [0.1, 0.9]. Any value set - // outside of this range will be clipped. - // Note: For best results, follow the default threshold. We will update - // the default threshold every time we release a new model.
- float video_confidence_threshold = 5; -} - -// Streaming video annotation feature. -enum StreamingFeature { - // Unspecified. - STREAMING_FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - STREAMING_LABEL_DETECTION = 1; - - // Shot change detection. - STREAMING_SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - STREAMING_EXPLICIT_CONTENT_DETECTION = 3; - - // Object detection and tracking. - STREAMING_OBJECT_TRACKING = 4; - - // Action recognition based on AutoML model. - STREAMING_AUTOML_ACTION_RECOGNITION = 23; - - // Video classification based on AutoML model. - STREAMING_AUTOML_CLASSIFICATION = 21; - - // Object detection and tracking based on AutoML model. - STREAMING_AUTOML_OBJECT_TRACKING = 22; -} - -// Video annotation feature. -enum Feature { - // Unspecified. - FEATURE_UNSPECIFIED = 0; - - // Label detection. Detect objects, such as dog or flower. - LABEL_DETECTION = 1; - - // Shot change detection. - SHOT_CHANGE_DETECTION = 2; - - // Explicit content detection. - EXPLICIT_CONTENT_DETECTION = 3; - - // Human face detection. - FACE_DETECTION = 4; - - // Speech transcription. - SPEECH_TRANSCRIPTION = 6; - - // OCR text detection and tracking. - TEXT_DETECTION = 7; - - // Object detection and tracking. - OBJECT_TRACKING = 9; - - // Logo detection, tracking, and recognition. - LOGO_RECOGNITION = 12; - - // Celebrity recognition. - CELEBRITY_RECOGNITION = 13; - - // Person detection. - PERSON_DETECTION = 14; -} - -// Config for SHOT_CHANGE_DETECTION. -message ShotChangeDetectionConfig { - // Model to use for shot change detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for OBJECT_TRACKING. -message ObjectTrackingConfig { - // Model to use for object tracking. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for EXPLICIT_CONTENT_DETECTION. 
-message ExplicitContentDetectionConfig { - // Model to use for explicit content detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; -} - -// Config for FACE_DETECTION. -message FaceDetectionConfig { - // Model to use for face detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 1; - - // Whether bounding boxes are included in the face annotation output. - bool include_bounding_boxes = 2; - - // Whether to enable face attributes detection, such as glasses, dark_glasses, - // mouth_open, etc. Ignored if 'include_bounding_boxes' is set to false. - bool include_attributes = 5; -} - -// Config for PERSON_DETECTION. -message PersonDetectionConfig { - // Whether bounding boxes are included in the person detection annotation - // output. - bool include_bounding_boxes = 1; - - // Whether to enable pose landmarks detection. Ignored if - // 'include_bounding_boxes' is set to false. - bool include_pose_landmarks = 2; - - // Whether to enable person attributes detection, such as cloth color (black, - // blue, etc), type (coat, dress, etc), pattern (plain, floral, etc), hair, - // etc. - // Ignored if 'include_bounding_boxes' is set to false. - bool include_attributes = 3; -} - -// Config for TEXT_DETECTION. -message TextDetectionConfig { - // Language hint can be specified if the language to be detected is known a - // priori. It can increase the accuracy of the detection. The language hint - // must be a language code in BCP-47 format. - // - // Automatic language detection is performed if no hint is provided. - repeated string language_hints = 1; - - // Model to use for text detection. - // Supported values: "builtin/stable" (the default if unset) and - // "builtin/latest". - string model = 2; -} - -// Video segment. 
-message VideoSegment { - // Time-offset, relative to the beginning of the video, - // corresponding to the start of the segment (inclusive). - google.protobuf.Duration start_time_offset = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the end of the segment (inclusive). - google.protobuf.Duration end_time_offset = 2; -} - -// Video segment level annotation results for label detection. -message LabelSegment { - // Video segment where a label was detected. - VideoSegment segment = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Video frame level annotation results for label detection. -message LabelFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Confidence that the label is accurate. Range: [0, 1]. - float confidence = 2; -} - -// Detected entity from video analysis. -message Entity { - // Opaque entity ID. Some IDs may be available in - // [Google Knowledge Graph Search - // API](https://developers.google.com/knowledge-graph/). - string entity_id = 1; - - // Textual description, e.g., `Fixed-gear bicycle`. - string description = 2; - - // Language code for `description` in BCP-47 format. - string language_code = 3; -} - -// Label annotation. -message LabelAnnotation { - // Detected entity. - Entity entity = 1; - - // Common categories for the detected entity. - // For example, when the label is `Terrier`, the category is likely `dog`. And - // in some cases there might be more than one category, e.g., `Terrier` could - // also be a `pet`. - repeated Entity category_entities = 2; - - // All video segments where a label was detected. - repeated LabelSegment segments = 3; - - // All video frames where a label was detected. - repeated LabelFrame frames = 4; -} - -// Video frame level annotation results for explicit content. 
-message ExplicitContentFrame { - // Time-offset, relative to the beginning of the video, corresponding to the - // video frame for this location. - google.protobuf.Duration time_offset = 1; - - // Likelihood of the pornography content. - Likelihood pornography_likelihood = 2; -} - -// Explicit content annotation (based on per-frame visual signals only). -// If no explicit content has been detected in a frame, no annotations are -// present for that frame. -message ExplicitContentAnnotation { - // All video frames where explicit content was detected. - repeated ExplicitContentFrame frames = 1; -} - -// Normalized bounding box. -// The normalized vertex coordinates are relative to the original image. -// Range: [0, 1]. -message NormalizedBoundingBox { - // Left X coordinate. - float left = 1; - - // Top Y coordinate. - float top = 2; - - // Right X coordinate. - float right = 3; - - // Bottom Y coordinate. - float bottom = 4; -} - -// For tracking-related features. -// An object at time_offset with attributes, and located with -// normalized_bounding_box. -message TimestampedObject { - // Normalized bounding box in a frame where the object is located. - NormalizedBoundingBox normalized_bounding_box = 1; - - // Time-offset, relative to the beginning of the video, - // corresponding to the video frame for this object. - google.protobuf.Duration time_offset = 2; - - // Optional. The attributes of the object in the bounding box. - repeated DetectedAttribute attributes = 3 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. The detected landmarks. - repeated DetectedLandmark landmarks = 4 - [(google.api.field_behavior) = OPTIONAL]; -} - -// A track of an object instance. -message Track { - // Video segment of a track. - VideoSegment segment = 1; - - // The object with timestamp and attributes per frame in the track. - repeated TimestampedObject timestamped_objects = 2; - - // Optional. Attributes at the track level. 
- repeated DetectedAttribute attributes = 3 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. The confidence score of the tracked object. - float confidence = 4 [(google.api.field_behavior) = OPTIONAL]; -} - -// A generic detected attribute represented by name in string format. -message DetectedAttribute { - // The name of the attribute, for example, glasses, dark_glasses, mouth_open. - // A full list of supported type names will be provided in the document. - string name = 1; - - // Detected attribute confidence. Range [0, 1]. - float confidence = 2; - - // Text value of the detection result. For example, the value for "HairColor" - // can be "black", "blonde", etc. - string value = 3; -} - -// Celebrity definition. -message Celebrity { - // The resource name of the celebrity. It has the format - // `video-intelligence/kg-mid`, which indicates a celebrity from the preloaded - // gallery. kg-mid is the ID in the Google Knowledge Graph, which is unique - // for the celebrity. - string name = 1; - - // The celebrity name. - string display_name = 2; - - // Textual description of additional information about the celebrity, if - // applicable. - string description = 3; -} - -// The annotation result of a celebrity face track. The RecognizedCelebrity -// field may be empty if the face track does not have any matched celebrities. -message CelebrityTrack { - // The recognized celebrity with confidence score. - message RecognizedCelebrity { - // The recognized celebrity. - Celebrity celebrity = 1; - - // Recognition confidence. Range [0, 1]. - float confidence = 2; - } - - // Top N matches of celebrities for the face in this track. - repeated RecognizedCelebrity celebrities = 1; - - // A track of a person's face. - Track face_track = 3; -} - -// Celebrity recognition annotation per video. -message CelebrityRecognitionAnnotation { - // The tracks detected from the input video, including recognized celebrities - // and other detected faces in the video. 
- repeated CelebrityTrack celebrity_tracks = 1; -} - -// A generic detected landmark represented by name in string format and a 2D -// location. -message DetectedLandmark { - // The name of this landmark, for example, left_hand, right_shoulder. - string name = 1; - - // The 2D point of the detected landmark using the normalized image - // coordinate system. The normalized coordinates have the range from 0 to 1. - NormalizedVertex point = 2; - - // The confidence score of the detected landmark. Range [0, 1]. - float confidence = 3; -} - -// Face detection annotation. -message FaceDetectionAnnotation { - // The face tracks with attributes. - repeated Track tracks = 3; - - // The thumbnail of a person's face. - bytes thumbnail = 4; -} - -// Person detection annotation per video. -message PersonDetectionAnnotation { - // The detected tracks of a person. - repeated Track tracks = 1; -} - -// Annotation results for a single video. -message VideoAnnotationResults { - // Video file location in - // [Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Video segment on which the annotation is run. - VideoSegment segment = 10; - - // Topical label annotations on video level or user-specified segment level. - // There is exactly one element for each unique label. - repeated LabelAnnotation segment_label_annotations = 2; - - // Presence label annotations on video level or user-specified segment level. - // There is exactly one element for each unique label. Compared to the - // existing topical `segment_label_annotations`, this field presents more - // fine-grained, segment-level labels detected in video content and is made - // available only when the client sets `LabelDetectionConfig.model` to - // "builtin/latest" in the request. - repeated LabelAnnotation segment_presence_label_annotations = 23; - - // Topical label annotations on shot level. - // There is exactly one element for each unique label. 
- repeated LabelAnnotation shot_label_annotations = 3; - - // Presence label annotations on shot level. There is exactly one element for - // each unique label. Compared to the existing topical - // `shot_label_annotations`, this field presents more fine-grained, shot-level - // labels detected in video content and is made available only when the client - // sets `LabelDetectionConfig.model` to "builtin/latest" in the request. - repeated LabelAnnotation shot_presence_label_annotations = 24; - - // Label annotations on frame level. - // There is exactly one element for each unique label. - repeated LabelAnnotation frame_label_annotations = 4; - - // Face detection annotations. - repeated FaceDetectionAnnotation face_detection_annotations = 13; - - // Shot annotations. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 6; - - // Explicit content annotation. - ExplicitContentAnnotation explicit_annotation = 7; - - // Speech transcription. - repeated SpeechTranscription speech_transcriptions = 11; - - // OCR text detection and tracking. - // Annotations for list of detected text snippets. Each will have list of - // frame information associated with it. - repeated TextAnnotation text_annotations = 12; - - // Annotations for list of objects detected and tracked in video. - repeated ObjectTrackingAnnotation object_annotations = 14; - - // Annotations for list of logos detected, tracked and recognized in video. - repeated LogoRecognitionAnnotation logo_recognition_annotations = 19; - - // Person detection annotations. - repeated PersonDetectionAnnotation person_detection_annotations = 20; - - // Celebrity recognition annotations. - CelebrityRecognitionAnnotation celebrity_recognition_annotations = 21; - - // If set, indicates an error. Note that for a single `AnnotateVideoRequest` - // some videos may succeed and some may fail. - google.rpc.Status error = 9; -} - -// Video annotation response. 
Included in the `response` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoResponse { - // Annotation results for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationResults annotation_results = 1; -} - -// Annotation progress for a single video. -message VideoAnnotationProgress { - // Video file location in - // [Cloud Storage](https://cloud.google.com/storage/). - string input_uri = 1; - - // Approximate percentage processed thus far. Guaranteed to be - // 100 when fully processed. - int32 progress_percent = 2; - - // Time when the request was received. - google.protobuf.Timestamp start_time = 3; - - // Time of the most recent update. - google.protobuf.Timestamp update_time = 4; - - // Specifies which feature is being tracked if the request contains more than - // one feature. - Feature feature = 5; - - // Specifies which segment is being tracked if the request contains more than - // one segment. - VideoSegment segment = 6; -} - -// Video annotation progress. Included in the `metadata` -// field of the `Operation` returned by the `GetOperation` -// call of the `google::longrunning::Operations` service. -message AnnotateVideoProgress { - // Progress metadata for all videos specified in `AnnotateVideoRequest`. - repeated VideoAnnotationProgress annotation_progress = 1; -} - -// Config for SPEECH_TRANSCRIPTION. -message SpeechTranscriptionConfig { - // Required. The language of the supplied audio as a - // [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag. - // Example: "en-US". - // See [Language Support](https://cloud.google.com/speech/docs/languages) - // for a list of the currently supported language codes. - string language_code = 1 [(google.api.field_behavior) = REQUIRED]; - - // Optional. Maximum number of recognition hypotheses to be returned. 
- // Specifically, the maximum number of `SpeechRecognitionAlternative` messages - // within each `SpeechTranscription`. The server may return fewer than - // `max_alternatives`. Valid values are `0`-`30`. A value of `0` or `1` will - // return a maximum of one. If omitted, will return a maximum of one. - int32 max_alternatives = 2 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If set to `true`, the server will attempt to filter out - // profanities, replacing all but the initial character in each filtered word - // with asterisks, e.g. "f***". If set to `false` or omitted, profanities - // won't be filtered out. - bool filter_profanity = 3 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. A means to provide context to assist the speech recognition. - repeated SpeechContext speech_contexts = 4 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If 'true', adds punctuation to recognition result hypotheses. - // This feature is only available in select languages. Setting this for - // requests in other languages has no effect at all. The default 'false' value - // does not add punctuation to result hypotheses. NOTE: "This is currently - // offered as an experimental service, complimentary to all users. In the - // future this may be exclusively available as a premium feature." - bool enable_automatic_punctuation = 5 - [(google.api.field_behavior) = OPTIONAL]; - - // Optional. For file formats, such as MXF or MKV, supporting multiple audio - // tracks, specify up to two tracks. Default: track 0. - repeated int32 audio_tracks = 6 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If 'true', enables speaker detection for each recognized word in - // the top alternative of the recognition result using a speaker_tag provided - // in the WordInfo. - // Note: When this is true, we send all the words from the beginning of the - // audio for the top alternative in every consecutive response. 
- // This is done in order to improve our speaker tags as our models learn to - // identify the speakers in the conversation over time. - bool enable_speaker_diarization = 7 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If set, specifies the estimated number of speakers in the - // conversation. If not set, defaults to '2'. Ignored unless - // enable_speaker_diarization is set to true. - int32 diarization_speaker_count = 8 [(google.api.field_behavior) = OPTIONAL]; - - // Optional. If `true`, the top result includes a list of words and the - // confidence for those words. If `false`, no word-level confidence - // information is returned. The default is `false`. - bool enable_word_confidence = 9 [(google.api.field_behavior) = OPTIONAL]; -} - -// Provides "hints" to the speech recognizer to favor specific words and phrases -// in the results. -message SpeechContext { - // Optional. A list of strings containing words and phrases "hints" so that - // the speech recognition is more likely to recognize them. This can be used - // to improve the accuracy for specific words and phrases, for example, if - // specific commands are typically spoken by the user. This can also be used - // to add additional words to the vocabulary of the recognizer. See - // [usage limits](https://cloud.google.com/speech/limits#content). - repeated string phrases = 1 [(google.api.field_behavior) = OPTIONAL]; -} - -// A speech recognition result corresponding to a portion of the audio. -message SpeechTranscription { - // May contain one or more recognition hypotheses (up to the maximum specified - // in `max_alternatives`). These alternatives are ordered in terms of - // accuracy, with the top (first) alternative being the most probable, as - // ranked by the recognizer. - repeated SpeechRecognitionAlternative alternatives = 1; - - // Output only. The [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) - // language tag of the language in this result. 
This language code was - // detected to have the most likelihood of being spoken in the audio. - string language_code = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// Alternative hypotheses (a.k.a. n-best list). -message SpeechRecognitionAlternative { - // Transcript text representing the words that the user spoke. - string transcript = 1; - - // Output only. The confidence estimate between 0.0 and 1.0. A higher number - // indicates an estimated greater likelihood that the recognized words are - // correct. This field is set only for the top alternative. - // This field is not guaranteed to be accurate and users should not rely on it - // to be always provided. - // The default of 0.0 is a sentinel value indicating `confidence` was not set. - float confidence = 2 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. A list of word-specific information for each recognized word. - // Note: When `enable_speaker_diarization` is set to true, you will see all - // the words from the beginning of the audio. - repeated WordInfo words = 3 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// Word-specific information for recognized words. Word information is only -// included in the response when certain request parameters are set, such -// as `enable_word_time_offsets`. -message WordInfo { - // Time offset relative to the beginning of the audio, and - // corresponding to the start of the spoken word. This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. - google.protobuf.Duration start_time = 1; - - // Time offset relative to the beginning of the audio, and - // corresponding to the end of the spoken word. This field is only set if - // `enable_word_time_offsets=true` and only in the top hypothesis. This is an - // experimental feature and the accuracy of the time offset can vary. 
- google.protobuf.Duration end_time = 2; - - // The word corresponding to this set of information. - string word = 3; - - // Output only. The confidence estimate between 0.0 and 1.0. A higher number - // indicates an estimated greater likelihood that the recognized words are - // correct. This field is set only for the top alternative. - // This field is not guaranteed to be accurate and users should not rely on it - // to be always provided. - // The default of 0.0 is a sentinel value indicating `confidence` was not set. - float confidence = 4 [(google.api.field_behavior) = OUTPUT_ONLY]; - - // Output only. A distinct integer value is assigned for every speaker within - // the audio. This field specifies which one of those speakers was detected to - // have spoken this word. Value ranges from 1 up to diarization_speaker_count, - // and is only set if speaker diarization is enabled. - int32 speaker_tag = 5 [(google.api.field_behavior) = OUTPUT_ONLY]; -} - -// A vertex represents a 2D point in the image. -// NOTE: the normalized vertex coordinates are relative to the original image -// and range from 0 to 1. -message NormalizedVertex { - // X coordinate. - float x = 1; - - // Y coordinate. - float y = 2; -} - -// Normalized bounding polygon for text (that might not be aligned with axis). -// Contains a list of the corner points in clockwise order starting from -// top-left corner. For example, for a rectangular bounding box: -// When the text is horizontal it might look like: -// 0----1 -// | | -// 3----2 -// -// When it's clockwise rotated 180 degrees around the top-left corner it -// becomes: -// 2----3 -// | | -// 1----0 -// -// and the vertex order will still be (0, 1, 2, 3). Note that values can be less -// than 0, or greater than 1 due to trigonometric calculations for location of -// the box. -message NormalizedBoundingPoly { - // Normalized vertices of the bounding polygon. 
- repeated NormalizedVertex vertices = 1; -} - -// Video segment level annotation results for text detection. -message TextSegment { - // Video segment where a text snippet was detected. - VideoSegment segment = 1; - - // Confidence for the track of detected text. It is calculated as the highest - // over all frames where OCR detected text appears. - float confidence = 2; - - // Information related to the frames where OCR detected text appears. - repeated TextFrame frames = 3; -} - -// Video frame level annotation results for text annotation (OCR). -// Contains information regarding timestamp and bounding box locations for the -// frames containing detected OCR text snippets. -message TextFrame { - // Bounding polygon of the detected text for this frame. - NormalizedBoundingPoly rotated_bounding_box = 1; - - // Timestamp of this frame. - google.protobuf.Duration time_offset = 2; -} - -// Annotations related to one detected OCR text snippet. This will contain the -// corresponding text, confidence value, and frame level information for each -// detection. -message TextAnnotation { - // The detected text. - string text = 1; - - // All video segments where OCR detected text appears. - repeated TextSegment segments = 2; -} - -// Video frame level annotations for object detection and tracking. This field -// stores per frame location, time offset, and confidence. -message ObjectTrackingFrame { - // The normalized bounding box location of this object track for the frame. - NormalizedBoundingBox normalized_bounding_box = 1; - - // The timestamp of the frame in microseconds. - google.protobuf.Duration time_offset = 2; -} - -// Annotations corresponding to one tracked object. -message ObjectTrackingAnnotation { - // Different representation of tracking info in non-streaming batch - // and streaming modes. - oneof track_info { - // Non-streaming batch mode ONLY. - // Each object track corresponds to one video segment where it appears. 
- VideoSegment segment = 3; - - // Streaming mode ONLY. - // In streaming mode, we do not know the end time of a tracked object - // before it is completed. Hence, there is no VideoSegment info returned. - // Instead, we provide a unique identifiable integer track_id so that - // the customers can correlate the results of the ongoing - // ObjectTrackAnnotation of the same track_id over time. - int64 track_id = 5; - } - - // Entity to specify the object category that this track is labeled as. - Entity entity = 1; - - // Object category's labeling confidence of this track. - float confidence = 4; - - // Information corresponding to all frames where this object track appears. - // Non-streaming batch mode: it may be one or multiple ObjectTrackingFrame - // messages in frames. - // Streaming mode: it can only be one ObjectTrackingFrame message in frames. - repeated ObjectTrackingFrame frames = 2; -} - -// Annotation corresponding to one detected, tracked and recognized logo class. -message LogoRecognitionAnnotation { - // Entity category information to specify the logo class that all the logo - // tracks within this LogoRecognitionAnnotation are recognized as. - Entity entity = 1; - - // All logo tracks where the recognized logo appears. Each track corresponds - // to one logo instance appearing in consecutive frames. - repeated Track tracks = 2; - - // All video segments where the recognized logo appears. There might be - // multiple instances of the same logo class appearing in one VideoSegment. - repeated VideoSegment segments = 3; -} - -// The top-level message sent by the client for the `StreamingAnnotateVideo` -// method. Multiple `StreamingAnnotateVideoRequest` messages are sent. -// The first message must only contain a `StreamingVideoConfig` message. -// All subsequent messages must only contain `input_content` data. -message StreamingAnnotateVideoRequest { - // *Required* The streaming request, which is either a streaming config or - // video content. 
- oneof streaming_request { - // Provides information to the annotator, specifying how to process the - // request. The first `StreamingAnnotateVideoRequest` message must only - // contain a `video_config` message. - StreamingVideoConfig video_config = 1; - - // The video data to be annotated. Chunks of video data are sequentially - // sent in `StreamingAnnotateVideoRequest` messages. Except the initial - // `StreamingAnnotateVideoRequest` message containing only - // `video_config`, all subsequent `StreamingAnnotateVideoRequest` - // messages must only contain the `input_content` field. - // Note: as with all bytes fields, protocol buffers use a pure binary - // representation (not base64). - bytes input_content = 2; - } -} - -// Provides information to the annotator that specifies how to process the -// request. -message StreamingVideoConfig { - // Config for requested annotation feature. - oneof streaming_config { - // Config for STREAMING_SHOT_CHANGE_DETECTION. - StreamingShotChangeDetectionConfig shot_change_detection_config = 2; - - // Config for STREAMING_LABEL_DETECTION. - StreamingLabelDetectionConfig label_detection_config = 3; - - // Config for STREAMING_EXPLICIT_CONTENT_DETECTION. - StreamingExplicitContentDetectionConfig explicit_content_detection_config = - 4; - - // Config for STREAMING_OBJECT_TRACKING. - StreamingObjectTrackingConfig object_tracking_config = 5; - - // Config for STREAMING_AUTOML_ACTION_RECOGNITION. - StreamingAutomlActionRecognitionConfig automl_action_recognition_config = - 23; - - // Config for STREAMING_AUTOML_CLASSIFICATION. - StreamingAutomlClassificationConfig automl_classification_config = 21; - - // Config for STREAMING_AUTOML_OBJECT_TRACKING. - StreamingAutomlObjectTrackingConfig automl_object_tracking_config = 22; - } - - // Requested annotation feature. - StreamingFeature feature = 1; - - // Streaming storage option. By default: storage is disabled. 
- StreamingStorageConfig storage_config = 30; -} - -// `StreamingAnnotateVideoResponse` is the only message returned to the client -// by `StreamingAnnotateVideo`. A series of zero or more -// `StreamingAnnotateVideoResponse` messages are streamed back to the client. -message StreamingAnnotateVideoResponse { - // If set, returns a [google.rpc.Status][google.rpc.Status] message that - // specifies the error for the operation. - google.rpc.Status error = 1; - - // Streaming annotation results. - StreamingVideoAnnotationResults annotation_results = 2; - - // Google Cloud Storage (GCS) URI that stores annotation results of one - // streaming session in JSON format. - // It is the annotation_result_storage_directory - // from the request followed by '/cloud_project_number-session_id'. - string annotation_results_uri = 3; -} - -// Streaming annotation results corresponding to a portion of the video -// that is currently being processed. -message StreamingVideoAnnotationResults { - // Shot annotation results. Each shot is represented as a video segment. - repeated VideoSegment shot_annotations = 1; - - // Label annotation results. - repeated LabelAnnotation label_annotations = 2; - - // Explicit content annotation results. - ExplicitContentAnnotation explicit_annotation = 3; - - // Object tracking results. - repeated ObjectTrackingAnnotation object_annotations = 4; -} - -// Config for STREAMING_SHOT_CHANGE_DETECTION. -message StreamingShotChangeDetectionConfig {} - -// Config for STREAMING_LABEL_DETECTION. -message StreamingLabelDetectionConfig { - // Whether the video has been captured from a stationary (i.e. non-moving) - // camera. When set to true, might improve detection accuracy for moving - // objects. Default: false. - bool stationary_camera = 1; -} - -// Config for STREAMING_EXPLICIT_CONTENT_DETECTION. -message StreamingExplicitContentDetectionConfig {} - -// Config for STREAMING_OBJECT_TRACKING. 
-message StreamingObjectTrackingConfig {} - -// Config for STREAMING_AUTOML_ACTION_RECOGNITION. -message StreamingAutomlActionRecognitionConfig { - // Resource name of AutoML model. - // Format: `projects/{project_id}/locations/{location_id}/models/{model_id}` - string model_name = 1; -} - -// Config for STREAMING_AUTOML_CLASSIFICATION. -message StreamingAutomlClassificationConfig { - // Resource name of AutoML model. - // Format: - // `projects/{project_number}/locations/{location_id}/models/{model_id}` - string model_name = 1; -} - -// Config for STREAMING_AUTOML_OBJECT_TRACKING. -message StreamingAutomlObjectTrackingConfig { - // Resource name of AutoML model. - // Format: `projects/{project_id}/locations/{location_id}/models/{model_id}` - string model_name = 1; -} - -// Config for streaming storage option. -message StreamingStorageConfig { - // Enable streaming storage. Default: false. - bool enable_storage_annotation_result = 1; - - // Cloud Storage URI to store all annotation results for one client. The - // client should specify this field as the top-level storage directory. - // Annotation results of different sessions will be put into different - // sub-directories denoted by project_name and session_id. All - // sub-directories will be auto-generated by the program and made accessible - // to the client in the response proto. URIs must be specified in the - // following format: `gs://bucket-id/object-id`. `bucket-id` should be a - // valid Cloud Storage bucket created by the client, and bucket permissions - // must also be configured properly. `object-id` can be an arbitrary string - // that makes sense to the client. Other URI formats will return an error and - // cause a Cloud Storage write failure. 
- string annotation_result_storage_directory = 3; -} diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2.py deleted file mode 100644 index 9d118459e293..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2.py +++ /dev/null @@ -1,6711 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto -"""Generated protocol buffer code.""" -from google.protobuf.internal import enum_type_wrapper -from google.protobuf import descriptor as _descriptor -from google.protobuf import message as _message -from google.protobuf import reflection as _reflection -from google.protobuf import symbol_database as _symbol_database - -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - -from google.api import annotations_pb2 as google_dot_api_dot_annotations__pb2 -from google.api import client_pb2 as google_dot_api_dot_client__pb2 -from google.api import field_behavior_pb2 as google_dot_api_dot_field__behavior__pb2 -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) -from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2 -from google.protobuf import timestamp_pb2 as google_dot_protobuf_dot_timestamp__pb2 -from google.rpc import status_pb2 as google_dot_rpc_dot_status__pb2 - - -DESCRIPTOR = _descriptor.FileDescriptor( - name="google/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto", - package="google.cloud.videointelligence.v1p3beta1", - syntax="proto3", - 
serialized_options=b"\n,com.google.cloud.videointelligence.v1p3beta1B\035VideoIntelligenceServiceProtoP\001ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p3beta1;videointelligence\252\002(Google.Cloud.VideoIntelligence.V1P3Beta1\312\002(Google\\Cloud\\VideoIntelligence\\V1p3beta1", - create_key=_descriptor._internal_create_key, - serialized_pb=b'\nGgoogle/cloud/videointelligence_v1p3beta1/proto/video_intelligence.proto\x12(google.cloud.videointelligence.v1p3beta1\x1a\x1cgoogle/api/annotations.proto\x1a\x17google/api/client.proto\x1a\x1fgoogle/api/field_behavior.proto\x1a#google/longrunning/operations.proto\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\x1a\x17google/rpc/status.proto"\x8c\x02\n\x14\x41nnotateVideoRequest\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x15\n\rinput_content\x18\x06 \x01(\x0c\x12H\n\x08\x66\x65\x61tures\x18\x02 \x03(\x0e\x32\x31.google.cloud.videointelligence.v1p3beta1.FeatureB\x03\xe0\x41\x02\x12M\n\rvideo_context\x18\x03 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoContext\x12\x17\n\noutput_uri\x18\x04 \x01(\tB\x03\xe0\x41\x01\x12\x18\n\x0blocation_id\x18\x05 \x01(\tB\x03\xe0\x41\x01"\x80\x07\n\x0cVideoContext\x12H\n\x08segments\x18\x01 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12^\n\x16label_detection_config\x18\x02 \x01(\x0b\x32>.google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig\x12i\n\x1cshot_change_detection_config\x18\x03 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.ShotChangeDetectionConfig\x12s\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32H.google.cloud.videointelligence.v1p3beta1.ExplicitContentDetectionConfig\x12\\\n\x15\x66\x61\x63\x65_detection_config\x18\x05 \x01(\x0b\x32=.google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig\x12h\n\x1bspeech_transcription_config\x18\x06 
\x01(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig\x12\\\n\x15text_detection_config\x18\x08 \x01(\x0b\x32=.google.cloud.videointelligence.v1p3beta1.TextDetectionConfig\x12`\n\x17person_detection_config\x18\x0b \x01(\x0b\x32?.google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig\x12^\n\x16object_tracking_config\x18\r \x01(\x0b\x32>.google.cloud.videointelligence.v1p3beta1.ObjectTrackingConfig"\xe4\x01\n\x14LabelDetectionConfig\x12Z\n\x14label_detection_mode\x18\x01 \x01(\x0e\x32<.google.cloud.videointelligence.v1p3beta1.LabelDetectionMode\x12\x19\n\x11stationary_camera\x18\x02 \x01(\x08\x12\r\n\x05model\x18\x03 \x01(\t\x12"\n\x1a\x66rame_confidence_threshold\x18\x04 \x01(\x02\x12"\n\x1avideo_confidence_threshold\x18\x05 \x01(\x02"*\n\x19ShotChangeDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"%\n\x14ObjectTrackingConfig\x12\r\n\x05model\x18\x01 \x01(\t"/\n\x1e\x45xplicitContentDetectionConfig\x12\r\n\x05model\x18\x01 \x01(\t"`\n\x13\x46\x61\x63\x65\x44\x65tectionConfig\x12\r\n\x05model\x18\x01 \x01(\t\x12\x1e\n\x16include_bounding_boxes\x18\x02 \x01(\x08\x12\x1a\n\x12include_attributes\x18\x05 \x01(\x08"s\n\x15PersonDetectionConfig\x12\x1e\n\x16include_bounding_boxes\x18\x01 \x01(\x08\x12\x1e\n\x16include_pose_landmarks\x18\x02 \x01(\x08\x12\x1a\n\x12include_attributes\x18\x03 \x01(\x08"<\n\x13TextDetectionConfig\x12\x16\n\x0elanguage_hints\x18\x01 \x03(\t\x12\r\n\x05model\x18\x02 \x01(\t"x\n\x0cVideoSegment\x12\x34\n\x11start_time_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x32\n\x0f\x65nd_time_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"k\n\x0cLabelSegment\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02"P\n\nLabelFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x12\n\nconfidence\x18\x02 \x01(\x02"G\n\x06\x45ntity\x12\x11\n\tentity_id\x18\x01 
\x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x02 \x01(\t\x12\x15\n\rlanguage_code\x18\x03 \x01(\t"\xb0\x02\n\x0fLabelAnnotation\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p3beta1.Entity\x12K\n\x11\x63\x61tegory_entities\x18\x02 \x03(\x0b\x32\x30.google.cloud.videointelligence.v1p3beta1.Entity\x12H\n\x08segments\x18\x03 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.LabelSegment\x12\x44\n\x06\x66rames\x18\x04 \x03(\x0b\x32\x34.google.cloud.videointelligence.v1p3beta1.LabelFrame"\x9c\x01\n\x14\x45xplicitContentFrame\x12.\n\x0btime_offset\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12T\n\x16pornography_likelihood\x18\x02 \x01(\x0e\x32\x34.google.cloud.videointelligence.v1p3beta1.Likelihood"k\n\x19\x45xplicitContentAnnotation\x12N\n\x06\x66rames\x18\x01 \x03(\x0b\x32>.google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame"Q\n\x15NormalizedBoundingBox\x12\x0c\n\x04left\x18\x01 \x01(\x02\x12\x0b\n\x03top\x18\x02 \x01(\x02\x12\r\n\x05right\x18\x03 \x01(\x02\x12\x0e\n\x06\x62ottom\x18\x04 \x01(\x02"\xcf\x02\n\x11TimestampedObject\x12`\n\x17normalized_bounding_box\x18\x01 \x01(\x0b\x32?.google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12T\n\nattributes\x18\x03 \x03(\x0b\x32;.google.cloud.videointelligence.v1p3beta1.DetectedAttributeB\x03\xe0\x41\x01\x12R\n\tlandmarks\x18\x04 \x03(\x0b\x32:.google.cloud.videointelligence.v1p3beta1.DetectedLandmarkB\x03\xe0\x41\x01"\x99\x02\n\x05Track\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12X\n\x13timestamped_objects\x18\x02 \x03(\x0b\x32;.google.cloud.videointelligence.v1p3beta1.TimestampedObject\x12T\n\nattributes\x18\x03 \x03(\x0b\x32;.google.cloud.videointelligence.v1p3beta1.DetectedAttributeB\x03\xe0\x41\x01\x12\x17\n\nconfidence\x18\x04 \x01(\x02\x42\x03\xe0\x41\x01"D\n\x11\x44\x65tectedAttribute\x12\x0c\n\x04name\x18\x01 
\x01(\t\x12\x12\n\nconfidence\x18\x02 \x01(\x02\x12\r\n\x05value\x18\x03 \x01(\t"D\n\tCelebrity\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x14\n\x0c\x64isplay_name\x18\x02 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x03 \x01(\t"\xab\x02\n\x0e\x43\x65lebrityTrack\x12\x61\n\x0b\x63\x65lebrities\x18\x01 \x03(\x0b\x32L.google.cloud.videointelligence.v1p3beta1.CelebrityTrack.RecognizedCelebrity\x12\x43\n\nface_track\x18\x03 \x01(\x0b\x32/.google.cloud.videointelligence.v1p3beta1.Track\x1aq\n\x13RecognizedCelebrity\x12\x46\n\tcelebrity\x18\x01 \x01(\x0b\x32\x33.google.cloud.videointelligence.v1p3beta1.Celebrity\x12\x12\n\nconfidence\x18\x02 \x01(\x02"t\n\x1e\x43\x65lebrityRecognitionAnnotation\x12R\n\x10\x63\x65lebrity_tracks\x18\x01 \x03(\x0b\x32\x38.google.cloud.videointelligence.v1p3beta1.CelebrityTrack"\x7f\n\x10\x44\x65tectedLandmark\x12\x0c\n\x04name\x18\x01 \x01(\t\x12I\n\x05point\x18\x02 \x01(\x0b\x32:.google.cloud.videointelligence.v1p3beta1.NormalizedVertex\x12\x12\n\nconfidence\x18\x03 \x01(\x02"m\n\x17\x46\x61\x63\x65\x44\x65tectionAnnotation\x12?\n\x06tracks\x18\x03 \x03(\x0b\x32/.google.cloud.videointelligence.v1p3beta1.Track\x12\x11\n\tthumbnail\x18\x04 \x01(\x0c"\\\n\x19PersonDetectionAnnotation\x12?\n\x06tracks\x18\x01 \x03(\x0b\x32/.google.cloud.videointelligence.v1p3beta1.Track"\xef\x0b\n\x16VideoAnnotationResults\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12G\n\x07segment\x18\n \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12\\\n\x19segment_label_annotations\x18\x02 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12\x65\n"segment_presence_label_annotations\x18\x17 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12Y\n\x16shot_label_annotations\x18\x03 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12\x62\n\x1fshot_presence_label_annotations\x18\x18 
\x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12Z\n\x17\x66rame_label_annotations\x18\x04 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12\x65\n\x1a\x66\x61\x63\x65_detection_annotations\x18\r \x03(\x0b\x32\x41.google.cloud.videointelligence.v1p3beta1.FaceDetectionAnnotation\x12P\n\x10shot_annotations\x18\x06 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12`\n\x13\x65xplicit_annotation\x18\x07 \x01(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation\x12\\\n\x15speech_transcriptions\x18\x0b \x03(\x0b\x32=.google.cloud.videointelligence.v1p3beta1.SpeechTranscription\x12R\n\x10text_annotations\x18\x0c \x03(\x0b\x32\x38.google.cloud.videointelligence.v1p3beta1.TextAnnotation\x12^\n\x12object_annotations\x18\x0e \x03(\x0b\x32\x42.google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation\x12i\n\x1clogo_recognition_annotations\x18\x13 \x03(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation\x12i\n\x1cperson_detection_annotations\x18\x14 \x03(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.PersonDetectionAnnotation\x12s\n!celebrity_recognition_annotations\x18\x15 \x01(\x0b\x32H.google.cloud.videointelligence.v1p3beta1.CelebrityRecognitionAnnotation\x12!\n\x05\x65rror\x18\t \x01(\x0b\x32\x12.google.rpc.Status"u\n\x15\x41nnotateVideoResponse\x12\\\n\x12\x61nnotation_results\x18\x01 \x03(\x0b\x32@.google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults"\xb4\x02\n\x17VideoAnnotationProgress\x12\x11\n\tinput_uri\x18\x01 \x01(\t\x12\x18\n\x10progress_percent\x18\x02 \x01(\x05\x12.\n\nstart_time\x18\x03 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12/\n\x0bupdate_time\x18\x04 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\x12\x42\n\x07\x66\x65\x61ture\x18\x05 \x01(\x0e\x32\x31.google.cloud.videointelligence.v1p3beta1.Feature\x12G\n\x07segment\x18\x06 
\x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment"w\n\x15\x41nnotateVideoProgress\x12^\n\x13\x61nnotation_progress\x18\x01 \x03(\x0b\x32\x41.google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress"\x88\x03\n\x19SpeechTranscriptionConfig\x12\x1a\n\rlanguage_code\x18\x01 \x01(\tB\x03\xe0\x41\x02\x12\x1d\n\x10max_alternatives\x18\x02 \x01(\x05\x42\x03\xe0\x41\x01\x12\x1d\n\x10\x66ilter_profanity\x18\x03 \x01(\x08\x42\x03\xe0\x41\x01\x12U\n\x0fspeech_contexts\x18\x04 \x03(\x0b\x32\x37.google.cloud.videointelligence.v1p3beta1.SpeechContextB\x03\xe0\x41\x01\x12)\n\x1c\x65nable_automatic_punctuation\x18\x05 \x01(\x08\x42\x03\xe0\x41\x01\x12\x19\n\x0c\x61udio_tracks\x18\x06 \x03(\x05\x42\x03\xe0\x41\x01\x12\'\n\x1a\x65nable_speaker_diarization\x18\x07 \x01(\x08\x42\x03\xe0\x41\x01\x12&\n\x19\x64iarization_speaker_count\x18\x08 \x01(\x05\x42\x03\xe0\x41\x01\x12#\n\x16\x65nable_word_confidence\x18\t \x01(\x08\x42\x03\xe0\x41\x01"%\n\rSpeechContext\x12\x14\n\x07phrases\x18\x01 \x03(\tB\x03\xe0\x41\x01"\x8f\x01\n\x13SpeechTranscription\x12\\\n\x0c\x61lternatives\x18\x01 \x03(\x0b\x32\x46.google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative\x12\x1a\n\rlanguage_code\x18\x02 \x01(\tB\x03\xe0\x41\x03"\x93\x01\n\x1cSpeechRecognitionAlternative\x12\x12\n\ntranscript\x18\x01 \x01(\t\x12\x17\n\nconfidence\x18\x02 \x01(\x02\x42\x03\xe0\x41\x03\x12\x46\n\x05words\x18\x03 \x03(\x0b\x32\x32.google.cloud.videointelligence.v1p3beta1.WordInfoB\x03\xe0\x41\x03"\xa7\x01\n\x08WordInfo\x12-\n\nstart_time\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\x12+\n\x08\x65nd_time\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12\x0c\n\x04word\x18\x03 \x01(\t\x12\x17\n\nconfidence\x18\x04 \x01(\x02\x42\x03\xe0\x41\x03\x12\x18\n\x0bspeaker_tag\x18\x05 \x01(\x05\x42\x03\xe0\x41\x03"(\n\x10NormalizedVertex\x12\t\n\x01x\x18\x01 \x01(\x02\x12\t\n\x01y\x18\x02 \x01(\x02"f\n\x16NormalizedBoundingPoly\x12L\n\x08vertices\x18\x01 
\x03(\x0b\x32:.google.cloud.videointelligence.v1p3beta1.NormalizedVertex"\xaf\x01\n\x0bTextSegment\x12G\n\x07segment\x18\x01 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12\x12\n\nconfidence\x18\x02 \x01(\x02\x12\x43\n\x06\x66rames\x18\x03 \x03(\x0b\x32\x33.google.cloud.videointelligence.v1p3beta1.TextFrame"\x9b\x01\n\tTextFrame\x12^\n\x14rotated_bounding_box\x18\x01 \x01(\x0b\x32@.google.cloud.videointelligence.v1p3beta1.NormalizedBoundingPoly\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"g\n\x0eTextAnnotation\x12\x0c\n\x04text\x18\x01 \x01(\t\x12G\n\x08segments\x18\x02 \x03(\x0b\x32\x35.google.cloud.videointelligence.v1p3beta1.TextSegment"\xa7\x01\n\x13ObjectTrackingFrame\x12`\n\x17normalized_bounding_box\x18\x01 \x01(\x0b\x32?.google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox\x12.\n\x0btime_offset\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration"\xac\x02\n\x18ObjectTrackingAnnotation\x12I\n\x07segment\x18\x03 \x01(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegmentH\x00\x12\x12\n\x08track_id\x18\x05 \x01(\x03H\x00\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p3beta1.Entity\x12\x12\n\nconfidence\x18\x04 \x01(\x02\x12M\n\x06\x66rames\x18\x02 \x03(\x0b\x32=.google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrameB\x0c\n\ntrack_info"\xe8\x01\n\x19LogoRecognitionAnnotation\x12@\n\x06\x65ntity\x18\x01 \x01(\x0b\x32\x30.google.cloud.videointelligence.v1p3beta1.Entity\x12?\n\x06tracks\x18\x02 \x03(\x0b\x32/.google.cloud.videointelligence.v1p3beta1.Track\x12H\n\x08segments\x18\x03 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment"\xa5\x01\n\x1dStreamingAnnotateVideoRequest\x12V\n\x0cvideo_config\x18\x01 \x01(\x0b\x32>.google.cloud.videointelligence.v1p3beta1.StreamingVideoConfigH\x00\x12\x17\n\rinput_content\x18\x02 
\x01(\x0cH\x00\x42\x13\n\x11streaming_request"\x8a\x08\n\x14StreamingVideoConfig\x12t\n\x1cshot_change_detection_config\x18\x02 \x01(\x0b\x32L.google.cloud.videointelligence.v1p3beta1.StreamingShotChangeDetectionConfigH\x00\x12i\n\x16label_detection_config\x18\x03 \x01(\x0b\x32G.google.cloud.videointelligence.v1p3beta1.StreamingLabelDetectionConfigH\x00\x12~\n!explicit_content_detection_config\x18\x04 \x01(\x0b\x32Q.google.cloud.videointelligence.v1p3beta1.StreamingExplicitContentDetectionConfigH\x00\x12i\n\x16object_tracking_config\x18\x05 \x01(\x0b\x32G.google.cloud.videointelligence.v1p3beta1.StreamingObjectTrackingConfigH\x00\x12|\n automl_action_recognition_config\x18\x17 \x01(\x0b\x32P.google.cloud.videointelligence.v1p3beta1.StreamingAutomlActionRecognitionConfigH\x00\x12u\n\x1c\x61utoml_classification_config\x18\x15 \x01(\x0b\x32M.google.cloud.videointelligence.v1p3beta1.StreamingAutomlClassificationConfigH\x00\x12v\n\x1d\x61utoml_object_tracking_config\x18\x16 \x01(\x0b\x32M.google.cloud.videointelligence.v1p3beta1.StreamingAutomlObjectTrackingConfigH\x00\x12K\n\x07\x66\x65\x61ture\x18\x01 \x01(\x0e\x32:.google.cloud.videointelligence.v1p3beta1.StreamingFeature\x12X\n\x0estorage_config\x18\x1e \x01(\x0b\x32@.google.cloud.videointelligence.v1p3beta1.StreamingStorageConfigB\x12\n\x10streaming_config"\xca\x01\n\x1eStreamingAnnotateVideoResponse\x12!\n\x05\x65rror\x18\x01 \x01(\x0b\x32\x12.google.rpc.Status\x12\x65\n\x12\x61nnotation_results\x18\x02 \x01(\x0b\x32I.google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults\x12\x1e\n\x16\x61nnotation_results_uri\x18\x03 \x01(\t"\x8b\x03\n\x1fStreamingVideoAnnotationResults\x12P\n\x10shot_annotations\x18\x01 \x03(\x0b\x32\x36.google.cloud.videointelligence.v1p3beta1.VideoSegment\x12T\n\x11label_annotations\x18\x02 \x03(\x0b\x32\x39.google.cloud.videointelligence.v1p3beta1.LabelAnnotation\x12`\n\x13\x65xplicit_annotation\x18\x03 
\x01(\x0b\x32\x43.google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation\x12^\n\x12object_annotations\x18\x04 \x03(\x0b\x32\x42.google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation"$\n"StreamingShotChangeDetectionConfig":\n\x1dStreamingLabelDetectionConfig\x12\x19\n\x11stationary_camera\x18\x01 \x01(\x08")\n\'StreamingExplicitContentDetectionConfig"\x1f\n\x1dStreamingObjectTrackingConfig"<\n&StreamingAutomlActionRecognitionConfig\x12\x12\n\nmodel_name\x18\x01 \x01(\t"9\n#StreamingAutomlClassificationConfig\x12\x12\n\nmodel_name\x18\x01 \x01(\t"9\n#StreamingAutomlObjectTrackingConfig\x12\x12\n\nmodel_name\x18\x01 \x01(\t"o\n\x16StreamingStorageConfig\x12(\n enable_storage_annotation_result\x18\x01 \x01(\x08\x12+\n#annotation_result_storage_directory\x18\x03 \x01(\t*r\n\x12LabelDetectionMode\x12$\n LABEL_DETECTION_MODE_UNSPECIFIED\x10\x00\x12\r\n\tSHOT_MODE\x10\x01\x12\x0e\n\nFRAME_MODE\x10\x02\x12\x17\n\x13SHOT_AND_FRAME_MODE\x10\x03*t\n\nLikelihood\x12\x1a\n\x16LIKELIHOOD_UNSPECIFIED\x10\x00\x12\x11\n\rVERY_UNLIKELY\x10\x01\x12\x0c\n\x08UNLIKELY\x10\x02\x12\x0c\n\x08POSSIBLE\x10\x03\x12\n\n\x06LIKELY\x10\x04\x12\x0f\n\x0bVERY_LIKELY\x10\x05*\xb6\x02\n\x10StreamingFeature\x12!\n\x1dSTREAMING_FEATURE_UNSPECIFIED\x10\x00\x12\x1d\n\x19STREAMING_LABEL_DETECTION\x10\x01\x12#\n\x1fSTREAMING_SHOT_CHANGE_DETECTION\x10\x02\x12(\n$STREAMING_EXPLICIT_CONTENT_DETECTION\x10\x03\x12\x1d\n\x19STREAMING_OBJECT_TRACKING\x10\x04\x12\'\n#STREAMING_AUTOML_ACTION_RECOGNITION\x10\x17\x12#\n\x1fSTREAMING_AUTOML_CLASSIFICATION\x10\x15\x12$\n 
STREAMING_AUTOML_OBJECT_TRACKING\x10\x16*\x90\x02\n\x07\x46\x65\x61ture\x12\x17\n\x13\x46\x45\x41TURE_UNSPECIFIED\x10\x00\x12\x13\n\x0fLABEL_DETECTION\x10\x01\x12\x19\n\x15SHOT_CHANGE_DETECTION\x10\x02\x12\x1e\n\x1a\x45XPLICIT_CONTENT_DETECTION\x10\x03\x12\x12\n\x0e\x46\x41\x43\x45_DETECTION\x10\x04\x12\x18\n\x14SPEECH_TRANSCRIPTION\x10\x06\x12\x12\n\x0eTEXT_DETECTION\x10\x07\x12\x13\n\x0fOBJECT_TRACKING\x10\t\x12\x14\n\x10LOGO_RECOGNITION\x10\x0c\x12\x19\n\x15\x43\x45LEBRITY_RECOGNITION\x10\r\x12\x14\n\x10PERSON_DETECTION\x10\x0e\x32\xce\x02\n\x18VideoIntelligenceService\x12\xdb\x01\n\rAnnotateVideo\x12>.google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest\x1a\x1d.google.longrunning.Operation"k\x82\xd3\xe4\x93\x02\x1f"\x1a/v1p3beta1/videos:annotate:\x01*\xda\x41\x12input_uri,features\xca\x41.\n\x15\x41nnotateVideoResponse\x12\x15\x41nnotateVideoProgress\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platform2\xad\x02\n!StreamingVideoIntelligenceService\x12\xb1\x01\n\x16StreamingAnnotateVideo\x12G.google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest\x1aH.google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse"\x00(\x01\x30\x01\x1aT\xca\x41 videointelligence.googleapis.com\xd2\x41.https://www.googleapis.com/auth/cloud-platformB\x80\x02\n,com.google.cloud.videointelligence.v1p3beta1B\x1dVideoIntelligenceServiceProtoP\x01ZYgoogle.golang.org/genproto/googleapis/cloud/videointelligence/v1p3beta1;videointelligence\xaa\x02(Google.Cloud.VideoIntelligence.V1P3Beta1\xca\x02(Google\\Cloud\\VideoIntelligence\\V1p3beta1b\x06proto3', - dependencies=[ - google_dot_api_dot_annotations__pb2.DESCRIPTOR, - google_dot_api_dot_client__pb2.DESCRIPTOR, - google_dot_api_dot_field__behavior__pb2.DESCRIPTOR, - google_dot_longrunning_dot_operations__pb2.DESCRIPTOR, - google_dot_protobuf_dot_duration__pb2.DESCRIPTOR, - google_dot_protobuf_dot_timestamp__pb2.DESCRIPTOR, - 
google_dot_rpc_dot_status__pb2.DESCRIPTOR, - ], -) - -_LABELDETECTIONMODE = _descriptor.EnumDescriptor( - name="LabelDetectionMode", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionMode", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION_MODE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_MODE", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FRAME_MODE", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_AND_FRAME_MODE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=11245, - serialized_end=11359, -) -_sym_db.RegisterEnumDescriptor(_LABELDETECTIONMODE) - -LabelDetectionMode = enum_type_wrapper.EnumTypeWrapper(_LABELDETECTIONMODE) -_LIKELIHOOD = _descriptor.EnumDescriptor( - name="Likelihood", - full_name="google.cloud.videointelligence.v1p3beta1.Likelihood", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="LIKELIHOOD_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_UNLIKELY", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="UNLIKELY", - index=2, - number=2, - serialized_options=None, - type=None, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="POSSIBLE", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LIKELY", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="VERY_LIKELY", - index=5, - number=5, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=11361, - serialized_end=11477, -) -_sym_db.RegisterEnumDescriptor(_LIKELIHOOD) - -Likelihood = enum_type_wrapper.EnumTypeWrapper(_LIKELIHOOD) -_STREAMINGFEATURE = _descriptor.EnumDescriptor( - name="StreamingFeature", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingFeature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="STREAMING_FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_OBJECT_TRACKING", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - 
_descriptor.EnumValueDescriptor( - name="STREAMING_AUTOML_ACTION_RECOGNITION", - index=5, - number=23, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_AUTOML_CLASSIFICATION", - index=6, - number=21, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="STREAMING_AUTOML_OBJECT_TRACKING", - index=7, - number=22, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=11480, - serialized_end=11790, -) -_sym_db.RegisterEnumDescriptor(_STREAMINGFEATURE) - -StreamingFeature = enum_type_wrapper.EnumTypeWrapper(_STREAMINGFEATURE) -_FEATURE = _descriptor.EnumDescriptor( - name="Feature", - full_name="google.cloud.videointelligence.v1p3beta1.Feature", - filename=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - values=[ - _descriptor.EnumValueDescriptor( - name="FEATURE_UNSPECIFIED", - index=0, - number=0, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LABEL_DETECTION", - index=1, - number=1, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SHOT_CHANGE_DETECTION", - index=2, - number=2, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="EXPLICIT_CONTENT_DETECTION", - index=3, - number=3, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="FACE_DETECTION", - index=4, - number=4, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="SPEECH_TRANSCRIPTION", 
- index=5, - number=6, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="TEXT_DETECTION", - index=6, - number=7, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="OBJECT_TRACKING", - index=7, - number=9, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="LOGO_RECOGNITION", - index=8, - number=12, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="CELEBRITY_RECOGNITION", - index=9, - number=13, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - _descriptor.EnumValueDescriptor( - name="PERSON_DETECTION", - index=10, - number=14, - serialized_options=None, - type=None, - create_key=_descriptor._internal_create_key, - ), - ], - containing_type=None, - serialized_options=None, - serialized_start=11793, - serialized_end=12065, -) -_sym_db.RegisterEnumDescriptor(_FEATURE) - -Feature = enum_type_wrapper.EnumTypeWrapper(_FEATURE) -LABEL_DETECTION_MODE_UNSPECIFIED = 0 -SHOT_MODE = 1 -FRAME_MODE = 2 -SHOT_AND_FRAME_MODE = 3 -LIKELIHOOD_UNSPECIFIED = 0 -VERY_UNLIKELY = 1 -UNLIKELY = 2 -POSSIBLE = 3 -LIKELY = 4 -VERY_LIKELY = 5 -STREAMING_FEATURE_UNSPECIFIED = 0 -STREAMING_LABEL_DETECTION = 1 -STREAMING_SHOT_CHANGE_DETECTION = 2 -STREAMING_EXPLICIT_CONTENT_DETECTION = 3 -STREAMING_OBJECT_TRACKING = 4 -STREAMING_AUTOML_ACTION_RECOGNITION = 23 -STREAMING_AUTOML_CLASSIFICATION = 21 -STREAMING_AUTOML_OBJECT_TRACKING = 22 -FEATURE_UNSPECIFIED = 0 -LABEL_DETECTION = 1 -SHOT_CHANGE_DETECTION = 2 -EXPLICIT_CONTENT_DETECTION = 3 -FACE_DETECTION = 4 -SPEECH_TRANSCRIPTION = 6 -TEXT_DETECTION = 7 -OBJECT_TRACKING = 9 -LOGO_RECOGNITION = 12 -CELEBRITY_RECOGNITION = 13 -PERSON_DETECTION = 14 - - -_ANNOTATEVIDEOREQUEST 
= _descriptor.Descriptor( - name="AnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.input_content", - index=1, - number=6, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="features", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.features", - index=2, - number=2, - type=14, - cpp_type=8, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_context", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.video_context", - index=3, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, 
- serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="output_uri", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.output_uri", - index=4, - number=4, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="location_id", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest.location_id", - index=5, - number=5, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=333, - serialized_end=601, -) - - -_VIDEOCONTEXT = _descriptor.Descriptor( - name="VideoContext", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.segments", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - 
name="label_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.label_detection_config", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.shot_change_detection_config", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.explicit_content_detection_config", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.face_detection_config", - index=4, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcription_config", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.speech_transcription_config", - index=5, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="text_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.text_detection_config", - index=6, - number=8, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="person_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.person_detection_config", - index=7, - number=11, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_tracking_config", - full_name="google.cloud.videointelligence.v1p3beta1.VideoContext.object_tracking_config", - index=8, - number=13, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - 
serialized_start=604, - serialized_end=1500, -) - - -_LABELDETECTIONCONFIG = _descriptor.Descriptor( - name="LabelDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="label_detection_mode", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig.label_detection_mode", - index=0, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="stationary_camera", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig.stationary_camera", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig.model", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_confidence_threshold", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig.frame_confidence_threshold", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - 
default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="video_confidence_threshold", - full_name="google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig.video_confidence_threshold", - index=4, - number=5, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1503, - serialized_end=1731, -) - - -_SHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="ShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.ShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.ShotChangeDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1733, - serialized_end=1775, -) - - -_OBJECTTRACKINGCONFIG = _descriptor.Descriptor( - name="ObjectTrackingConfig", - 
full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1777, - serialized_end=1814, -) - - -_EXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="ExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1816, - serialized_end=1863, -) - - -_FACEDETECTIONCONFIG = _descriptor.Descriptor( - name="FaceDetectionConfig", - 
full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig.model", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_bounding_boxes", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig.include_bounding_boxes", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_attributes", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig.include_attributes", - index=2, - number=5, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1865, - serialized_end=1961, -) - - -_PERSONDETECTIONCONFIG = _descriptor.Descriptor( - name="PersonDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig", - filename=None, - 
file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="include_bounding_boxes", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig.include_bounding_boxes", - index=0, - number=1, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_pose_landmarks", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig.include_pose_landmarks", - index=1, - number=2, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="include_attributes", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig.include_attributes", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=1963, - serialized_end=2078, -) - - -_TEXTDETECTIONCONFIG = _descriptor.Descriptor( - name="TextDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.TextDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_hints", - full_name="google.cloud.videointelligence.v1p3beta1.TextDetectionConfig.language_hints", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="model", - full_name="google.cloud.videointelligence.v1p3beta1.TextDetectionConfig.model", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2080, - serialized_end=2140, -) - - -_VIDEOSEGMENT = _descriptor.Descriptor( - name="VideoSegment", - full_name="google.cloud.videointelligence.v1p3beta1.VideoSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.VideoSegment.start_time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time_offset", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoSegment.end_time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2142, - serialized_end=2262, -) - - -_LABELSEGMENT = _descriptor.Descriptor( - name="LabelSegment", - full_name="google.cloud.videointelligence.v1p3beta1.LabelSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.LabelSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.LabelSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2264, - serialized_end=2371, -) - - -_LABELFRAME = 
_descriptor.Descriptor( - name="LabelFrame", - full_name="google.cloud.videointelligence.v1p3beta1.LabelFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.LabelFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.LabelFrame.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2373, - serialized_end=2453, -) - - -_ENTITY = _descriptor.Descriptor( - name="Entity", - full_name="google.cloud.videointelligence.v1p3beta1.Entity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity_id", - full_name="google.cloud.videointelligence.v1p3beta1.Entity.entity_id", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - full_name="google.cloud.videointelligence.v1p3beta1.Entity.description", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1p3beta1.Entity.language_code", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2455, - serialized_end=2526, -) - - -_LABELANNOTATION = _descriptor.Descriptor( - name="LabelAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.LabelAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p3beta1.LabelAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="category_entities", - 
full_name="google.cloud.videointelligence.v1p3beta1.LabelAnnotation.category_entities", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p3beta1.LabelAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p3beta1.LabelAnnotation.frames", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2529, - serialized_end=2833, -) - - -_EXPLICITCONTENTFRAME = _descriptor.Descriptor( - name="ExplicitContentFrame", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame.time_offset", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - 
has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="pornography_likelihood", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame.pornography_likelihood", - index=1, - number=2, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2836, - serialized_end=2992, -) - - -_EXPLICITCONTENTANNOTATION = _descriptor.Descriptor( - name="ExplicitContentAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation.frames", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=2994, - serialized_end=3101, -) - - -_NORMALIZEDBOUNDINGBOX = _descriptor.Descriptor( - name="NormalizedBoundingBox", - 
full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="left", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox.left", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="top", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox.top", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="right", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox.right", - index=2, - number=3, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="bottom", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox.bottom", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - 
extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3103, - serialized_end=3184, -) - - -_TIMESTAMPEDOBJECT = _descriptor.Descriptor( - name="TimestampedObject", - full_name="google.cloud.videointelligence.v1p3beta1.TimestampedObject", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_box", - full_name="google.cloud.videointelligence.v1p3beta1.TimestampedObject.normalized_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.TimestampedObject.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="attributes", - full_name="google.cloud.videointelligence.v1p3beta1.TimestampedObject.attributes", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="landmarks", - 
full_name="google.cloud.videointelligence.v1p3beta1.TimestampedObject.landmarks", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3187, - serialized_end=3522, -) - - -_TRACK = _descriptor.Descriptor( - name="Track", - full_name="google.cloud.videointelligence.v1p3beta1.Track", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.Track.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="timestamped_objects", - full_name="google.cloud.videointelligence.v1p3beta1.Track.timestamped_objects", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="attributes", - full_name="google.cloud.videointelligence.v1p3beta1.Track.attributes", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - 
enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.Track.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3525, - serialized_end=3806, -) - - -_DETECTEDATTRIBUTE = _descriptor.Descriptor( - name="DetectedAttribute", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedAttribute", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedAttribute.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedAttribute.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - 
create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="value", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedAttribute.value", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3808, - serialized_end=3876, -) - - -_CELEBRITY = _descriptor.Descriptor( - name="Celebrity", - full_name="google.cloud.videointelligence.v1p3beta1.Celebrity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.cloud.videointelligence.v1p3beta1.Celebrity.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="display_name", - full_name="google.cloud.videointelligence.v1p3beta1.Celebrity.display_name", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="description", - 
full_name="google.cloud.videointelligence.v1p3beta1.Celebrity.description", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3878, - serialized_end=3946, -) - - -_CELEBRITYTRACK_RECOGNIZEDCELEBRITY = _descriptor.Descriptor( - name="RecognizedCelebrity", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack.RecognizedCelebrity", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="celebrity", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack.RecognizedCelebrity.celebrity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack.RecognizedCelebrity.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - 
extension_ranges=[], - oneofs=[], - serialized_start=4135, - serialized_end=4248, -) - -_CELEBRITYTRACK = _descriptor.Descriptor( - name="CelebrityTrack", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="celebrities", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack.celebrities", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_track", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityTrack.face_track", - index=1, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[_CELEBRITYTRACK_RECOGNIZEDCELEBRITY,], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=3949, - serialized_end=4248, -) - - -_CELEBRITYRECOGNITIONANNOTATION = _descriptor.Descriptor( - name="CelebrityRecognitionAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityRecognitionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="celebrity_tracks", - full_name="google.cloud.videointelligence.v1p3beta1.CelebrityRecognitionAnnotation.celebrity_tracks", - index=0, - number=1, - type=11, 
- cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4250, - serialized_end=4366, -) - - -_DETECTEDLANDMARK = _descriptor.Descriptor( - name="DetectedLandmark", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedLandmark", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="name", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedLandmark.name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="point", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedLandmark.point", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.DetectedLandmark.confidence", - index=2, - number=3, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - 
extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4368, - serialized_end=4495, -) - - -_FACEDETECTIONANNOTATION = _descriptor.Descriptor( - name="FaceDetectionAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="tracks", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionAnnotation.tracks", - index=0, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="thumbnail", - full_name="google.cloud.videointelligence.v1p3beta1.FaceDetectionAnnotation.thumbnail", - index=1, - number=4, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4497, - serialized_end=4606, -) - - -_PERSONDETECTIONANNOTATION = _descriptor.Descriptor( - name="PersonDetectionAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="tracks", - full_name="google.cloud.videointelligence.v1p3beta1.PersonDetectionAnnotation.tracks", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4608, - serialized_end=4700, -) - - -_VIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="VideoAnnotationResults", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.segment", - index=1, - number=10, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_label_annotations", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.segment_label_annotations", - index=2, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment_presence_label_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.segment_presence_label_annotations", - index=3, - number=23, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_label_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.shot_label_annotations", - index=4, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_presence_label_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.shot_presence_label_annotations", - index=5, - number=24, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frame_label_annotations", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.frame_label_annotations", - index=6, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="face_detection_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.face_detection_annotations", - index=7, - number=13, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.shot_annotations", - index=8, - number=6, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_annotation", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.explicit_annotation", - index=9, - number=7, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_transcriptions", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.speech_transcriptions", - index=10, - number=11, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="text_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.text_annotations", - index=11, - number=12, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.object_annotations", - index=12, - number=14, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="logo_recognition_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.logo_recognition_annotations", - index=13, - number=19, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="person_detection_annotations", - 
full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.person_detection_annotations", - index=14, - number=20, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="celebrity_recognition_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.celebrity_recognition_annotations", - index=15, - number=21, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="error", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults.error", - index=16, - number=9, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=4703, - serialized_end=6222, -) - - -_ANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="AnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_results", - 
full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse.annotation_results", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6224, - serialized_end=6341, -) - - -_VIDEOANNOTATIONPROGRESS = _descriptor.Descriptor( - name="VideoAnnotationProgress", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="input_uri", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.input_uri", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="progress_percent", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.progress_percent", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.start_time", - 
index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="update_time", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.update_time", - index=3, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="feature", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.feature", - index=4, - number=5, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress.segment", - index=5, - number=6, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6344, - serialized_end=6652, -) - - -_ANNOTATEVIDEOPROGRESS = _descriptor.Descriptor( - name="AnnotateVideoProgress", - 
full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoProgress", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="annotation_progress", - full_name="google.cloud.videointelligence.v1p3beta1.AnnotateVideoProgress.annotation_progress", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6654, - serialized_end=6773, -) - - -_SPEECHTRANSCRIPTIONCONFIG = _descriptor.Descriptor( - name="SpeechTranscriptionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.language_code", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\002", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="max_alternatives", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.max_alternatives", - index=1, - number=2, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="filter_profanity", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.filter_profanity", - index=2, - number=3, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speech_contexts", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.speech_contexts", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_automatic_punctuation", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.enable_automatic_punctuation", - index=4, - number=5, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="audio_tracks", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.audio_tracks", - index=5, - number=6, - type=5, - cpp_type=1, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - 
serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_speaker_diarization", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.enable_speaker_diarization", - index=6, - number=7, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="diarization_speaker_count", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.diarization_speaker_count", - index=7, - number=8, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="enable_word_confidence", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig.enable_word_confidence", - index=8, - number=9, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=6776, - serialized_end=7168, -) - - -_SPEECHCONTEXT = _descriptor.Descriptor( - name="SpeechContext", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechContext", - filename=None, - file=DESCRIPTOR, - 
containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="phrases", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechContext.phrases", - index=0, - number=1, - type=9, - cpp_type=9, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\001", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7170, - serialized_end=7207, -) - - -_SPEECHTRANSCRIPTION = _descriptor.Descriptor( - name="SpeechTranscription", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscription", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="alternatives", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscription.alternatives", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="language_code", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechTranscription.language_code", - index=1, - number=2, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - 
enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7210, - serialized_end=7353, -) - - -_SPEECHRECOGNITIONALTERNATIVE = _descriptor.Descriptor( - name="SpeechRecognitionAlternative", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="transcript", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative.transcript", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="words", - full_name="google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative.words", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - 
syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7356, - serialized_end=7503, -) - - -_WORDINFO = _descriptor.Descriptor( - name="WordInfo", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="start_time", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo.start_time", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="end_time", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo.end_time", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="word", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo.word", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, 
- extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="speaker_tag", - full_name="google.cloud.videointelligence.v1p3beta1.WordInfo.speaker_tag", - index=4, - number=5, - type=5, - cpp_type=1, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=b"\340A\003", - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7506, - serialized_end=7673, -) - - -_NORMALIZEDVERTEX = _descriptor.Descriptor( - name="NormalizedVertex", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedVertex", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="x", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedVertex.x", - index=0, - number=1, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="y", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedVertex.y", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - 
serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7675, - serialized_end=7715, -) - - -_NORMALIZEDBOUNDINGPOLY = _descriptor.Descriptor( - name="NormalizedBoundingPoly", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingPoly", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="vertices", - full_name="google.cloud.videointelligence.v1p3beta1.NormalizedBoundingPoly.vertices", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7717, - serialized_end=7819, -) - - -_TEXTSEGMENT = _descriptor.Descriptor( - name="TextSegment", - full_name="google.cloud.videointelligence.v1p3beta1.TextSegment", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.TextSegment.segment", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.TextSegment.confidence", - index=1, - number=2, - type=2, - cpp_type=6, - label=1, - 
has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p3beta1.TextSegment.frames", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=7822, - serialized_end=7997, -) - - -_TEXTFRAME = _descriptor.Descriptor( - name="TextFrame", - full_name="google.cloud.videointelligence.v1p3beta1.TextFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="rotated_bounding_box", - full_name="google.cloud.videointelligence.v1p3beta1.TextFrame.rotated_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.TextFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - 
file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=8000, - serialized_end=8155, -) - - -_TEXTANNOTATION = _descriptor.Descriptor( - name="TextAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.TextAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="text", - full_name="google.cloud.videointelligence.v1p3beta1.TextAnnotation.text", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p3beta1.TextAnnotation.segments", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=8157, - serialized_end=8260, -) - - -_OBJECTTRACKINGFRAME = _descriptor.Descriptor( - name="ObjectTrackingFrame", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrame", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="normalized_bounding_box", - 
full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrame.normalized_bounding_box", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="time_offset", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrame.time_offset", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=8263, - serialized_end=8430, -) - - -_OBJECTTRACKINGANNOTATION = _descriptor.Descriptor( - name="ObjectTrackingAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="segment", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.segment", - index=0, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="track_id", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.track_id", - index=1, - number=5, - 
type=3, - cpp_type=2, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.entity", - index=2, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="confidence", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.confidence", - index=3, - number=4, - type=2, - cpp_type=6, - label=1, - has_default_value=False, - default_value=float(0), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="frames", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.frames", - index=4, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="track_info", - full_name="google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation.track_info", - index=0, - containing_type=None, - 
create_key=_descriptor._internal_create_key, - fields=[], - ), - ], - serialized_start=8433, - serialized_end=8733, -) - - -_LOGORECOGNITIONANNOTATION = _descriptor.Descriptor( - name="LogoRecognitionAnnotation", - full_name="google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="entity", - full_name="google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation.entity", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="tracks", - full_name="google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation.tracks", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="segments", - full_name="google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation.segments", - index=2, - number=3, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=8736, - serialized_end=8968, -) - - 
-_STREAMINGANNOTATEVIDEOREQUEST = _descriptor.Descriptor( - name="StreamingAnnotateVideoRequest", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="video_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest.video_config", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="input_content", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest.input_content", - index=1, - number=2, - type=12, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"", - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="streaming_request", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest.streaming_request", - index=0, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - ), - ], - serialized_start=8971, - serialized_end=9136, -) - - -_STREAMINGVIDEOCONFIG = _descriptor.Descriptor( - name="StreamingVideoConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ 
- _descriptor.FieldDescriptor( - name="shot_change_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.shot_change_detection_config", - index=0, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.label_detection_config", - index=1, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_content_detection_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.explicit_content_detection_config", - index=2, - number=4, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_tracking_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.object_tracking_config", - index=3, - number=5, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - 
name="automl_action_recognition_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.automl_action_recognition_config", - index=4, - number=23, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="automl_classification_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.automl_classification_config", - index=5, - number=21, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="automl_object_tracking_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.automl_object_tracking_config", - index=6, - number=22, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="feature", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.feature", - index=7, - number=1, - type=14, - cpp_type=8, - label=1, - has_default_value=False, - default_value=0, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="storage_config", - 
full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.storage_config", - index=8, - number=30, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[ - _descriptor.OneofDescriptor( - name="streaming_config", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig.streaming_config", - index=0, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - ), - ], - serialized_start=9139, - serialized_end=10173, -) - - -_STREAMINGANNOTATEVIDEORESPONSE = _descriptor.Descriptor( - name="StreamingAnnotateVideoResponse", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="error", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse.error", - index=0, - number=1, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="annotation_results", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse.annotation_results", - index=1, - number=2, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="annotation_results_uri", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse.annotation_results_uri", - index=2, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10176, - serialized_end=10378, -) - - -_STREAMINGVIDEOANNOTATIONRESULTS = _descriptor.Descriptor( - name="StreamingVideoAnnotationResults", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="shot_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults.shot_annotations", - index=0, - number=1, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="label_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults.label_annotations", - index=1, - number=2, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - 
is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="explicit_annotation", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults.explicit_annotation", - index=2, - number=3, - type=11, - cpp_type=10, - label=1, - has_default_value=False, - default_value=None, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="object_annotations", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults.object_annotations", - index=3, - number=4, - type=11, - cpp_type=10, - label=3, - has_default_value=False, - default_value=[], - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10381, - serialized_end=10776, -) - - -_STREAMINGSHOTCHANGEDETECTIONCONFIG = _descriptor.Descriptor( - name="StreamingShotChangeDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingShotChangeDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10778, - serialized_end=10814, -) - - -_STREAMINGLABELDETECTIONCONFIG = _descriptor.Descriptor( - name="StreamingLabelDetectionConfig", - 
full_name="google.cloud.videointelligence.v1p3beta1.StreamingLabelDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="stationary_camera", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingLabelDetectionConfig.stationary_camera", - index=0, - number=1, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10816, - serialized_end=10874, -) - - -_STREAMINGEXPLICITCONTENTDETECTIONCONFIG = _descriptor.Descriptor( - name="StreamingExplicitContentDetectionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingExplicitContentDetectionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10876, - serialized_end=10917, -) - - -_STREAMINGOBJECTTRACKINGCONFIG = _descriptor.Descriptor( - name="StreamingObjectTrackingConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingObjectTrackingConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10919, - serialized_end=10950, -) - - 
-_STREAMINGAUTOMLACTIONRECOGNITIONCONFIG = _descriptor.Descriptor( - name="StreamingAutomlActionRecognitionConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlActionRecognitionConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model_name", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlActionRecognitionConfig.model_name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=10952, - serialized_end=11012, -) - - -_STREAMINGAUTOMLCLASSIFICATIONCONFIG = _descriptor.Descriptor( - name="StreamingAutomlClassificationConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlClassificationConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model_name", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlClassificationConfig.model_name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - 
oneofs=[], - serialized_start=11014, - serialized_end=11071, -) - - -_STREAMINGAUTOMLOBJECTTRACKINGCONFIG = _descriptor.Descriptor( - name="StreamingAutomlObjectTrackingConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlObjectTrackingConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="model_name", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingAutomlObjectTrackingConfig.model_name", - index=0, - number=1, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=11073, - serialized_end=11130, -) - - -_STREAMINGSTORAGECONFIG = _descriptor.Descriptor( - name="StreamingStorageConfig", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingStorageConfig", - filename=None, - file=DESCRIPTOR, - containing_type=None, - create_key=_descriptor._internal_create_key, - fields=[ - _descriptor.FieldDescriptor( - name="enable_storage_annotation_result", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingStorageConfig.enable_storage_annotation_result", - index=0, - number=1, - type=8, - cpp_type=7, - label=1, - has_default_value=False, - default_value=False, - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - _descriptor.FieldDescriptor( - name="annotation_result_storage_directory", - 
full_name="google.cloud.videointelligence.v1p3beta1.StreamingStorageConfig.annotation_result_storage_directory", - index=1, - number=3, - type=9, - cpp_type=9, - label=1, - has_default_value=False, - default_value=b"".decode("utf-8"), - message_type=None, - enum_type=None, - containing_type=None, - is_extension=False, - extension_scope=None, - serialized_options=None, - file=DESCRIPTOR, - create_key=_descriptor._internal_create_key, - ), - ], - extensions=[], - nested_types=[], - enum_types=[], - serialized_options=None, - is_extendable=False, - syntax="proto3", - extension_ranges=[], - oneofs=[], - serialized_start=11132, - serialized_end=11243, -) - -_ANNOTATEVIDEOREQUEST.fields_by_name["features"].enum_type = _FEATURE -_ANNOTATEVIDEOREQUEST.fields_by_name["video_context"].message_type = _VIDEOCONTEXT -_VIDEOCONTEXT.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_VIDEOCONTEXT.fields_by_name[ - "label_detection_config" -].message_type = _LABELDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "shot_change_detection_config" -].message_type = _SHOTCHANGEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "explicit_content_detection_config" -].message_type = _EXPLICITCONTENTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "face_detection_config" -].message_type = _FACEDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "speech_transcription_config" -].message_type = _SPEECHTRANSCRIPTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "text_detection_config" -].message_type = _TEXTDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "person_detection_config" -].message_type = _PERSONDETECTIONCONFIG -_VIDEOCONTEXT.fields_by_name[ - "object_tracking_config" -].message_type = _OBJECTTRACKINGCONFIG -_LABELDETECTIONCONFIG.fields_by_name[ - "label_detection_mode" -].enum_type = _LABELDETECTIONMODE -_VIDEOSEGMENT.fields_by_name[ - "start_time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_VIDEOSEGMENT.fields_by_name[ - "end_time_offset" -].message_type = 
google_dot_protobuf_dot_duration__pb2._DURATION -_LABELSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_LABELFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_LABELANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["category_entities"].message_type = _ENTITY -_LABELANNOTATION.fields_by_name["segments"].message_type = _LABELSEGMENT -_LABELANNOTATION.fields_by_name["frames"].message_type = _LABELFRAME -_EXPLICITCONTENTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_EXPLICITCONTENTFRAME.fields_by_name["pornography_likelihood"].enum_type = _LIKELIHOOD -_EXPLICITCONTENTANNOTATION.fields_by_name["frames"].message_type = _EXPLICITCONTENTFRAME -_TIMESTAMPEDOBJECT.fields_by_name[ - "normalized_bounding_box" -].message_type = _NORMALIZEDBOUNDINGBOX -_TIMESTAMPEDOBJECT.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TIMESTAMPEDOBJECT.fields_by_name["attributes"].message_type = _DETECTEDATTRIBUTE -_TIMESTAMPEDOBJECT.fields_by_name["landmarks"].message_type = _DETECTEDLANDMARK -_TRACK.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_TRACK.fields_by_name["timestamped_objects"].message_type = _TIMESTAMPEDOBJECT -_TRACK.fields_by_name["attributes"].message_type = _DETECTEDATTRIBUTE -_CELEBRITYTRACK_RECOGNIZEDCELEBRITY.fields_by_name[ - "celebrity" -].message_type = _CELEBRITY -_CELEBRITYTRACK_RECOGNIZEDCELEBRITY.containing_type = _CELEBRITYTRACK -_CELEBRITYTRACK.fields_by_name[ - "celebrities" -].message_type = _CELEBRITYTRACK_RECOGNIZEDCELEBRITY -_CELEBRITYTRACK.fields_by_name["face_track"].message_type = _TRACK -_CELEBRITYRECOGNITIONANNOTATION.fields_by_name[ - "celebrity_tracks" -].message_type = _CELEBRITYTRACK -_DETECTEDLANDMARK.fields_by_name["point"].message_type = _NORMALIZEDVERTEX -_FACEDETECTIONANNOTATION.fields_by_name["tracks"].message_type = 
_TRACK -_PERSONDETECTIONANNOTATION.fields_by_name["tracks"].message_type = _TRACK -_VIDEOANNOTATIONRESULTS.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "segment_presence_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_presence_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "frame_label_annotations" -].message_type = _LABELANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "face_detection_annotations" -].message_type = _FACEDETECTIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name["shot_annotations"].message_type = _VIDEOSEGMENT -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "speech_transcriptions" -].message_type = _SPEECHTRANSCRIPTION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "text_annotations" -].message_type = _TEXTANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "object_annotations" -].message_type = _OBJECTTRACKINGANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "logo_recognition_annotations" -].message_type = _LOGORECOGNITIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "person_detection_annotations" -].message_type = _PERSONDETECTIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "celebrity_recognition_annotations" -].message_type = _CELEBRITYRECOGNITIONANNOTATION -_VIDEOANNOTATIONRESULTS.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_ANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _VIDEOANNOTATIONRESULTS -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "start_time" -].message_type = 
google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name[ - "update_time" -].message_type = google_dot_protobuf_dot_timestamp__pb2._TIMESTAMP -_VIDEOANNOTATIONPROGRESS.fields_by_name["feature"].enum_type = _FEATURE -_VIDEOANNOTATIONPROGRESS.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_ANNOTATEVIDEOPROGRESS.fields_by_name[ - "annotation_progress" -].message_type = _VIDEOANNOTATIONPROGRESS -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "speech_contexts" -].message_type = _SPEECHCONTEXT -_SPEECHTRANSCRIPTION.fields_by_name[ - "alternatives" -].message_type = _SPEECHRECOGNITIONALTERNATIVE -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["words"].message_type = _WORDINFO -_WORDINFO.fields_by_name[ - "start_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_WORDINFO.fields_by_name[ - "end_time" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_NORMALIZEDBOUNDINGPOLY.fields_by_name["vertices"].message_type = _NORMALIZEDVERTEX -_TEXTSEGMENT.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_TEXTSEGMENT.fields_by_name["frames"].message_type = _TEXTFRAME -_TEXTFRAME.fields_by_name["rotated_bounding_box"].message_type = _NORMALIZEDBOUNDINGPOLY -_TEXTFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_TEXTANNOTATION.fields_by_name["segments"].message_type = _TEXTSEGMENT -_OBJECTTRACKINGFRAME.fields_by_name[ - "normalized_bounding_box" -].message_type = _NORMALIZEDBOUNDINGBOX -_OBJECTTRACKINGFRAME.fields_by_name[ - "time_offset" -].message_type = google_dot_protobuf_dot_duration__pb2._DURATION -_OBJECTTRACKINGANNOTATION.fields_by_name["segment"].message_type = _VIDEOSEGMENT -_OBJECTTRACKINGANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_OBJECTTRACKINGANNOTATION.fields_by_name["frames"].message_type = _OBJECTTRACKINGFRAME -_OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"].fields.append( - 
_OBJECTTRACKINGANNOTATION.fields_by_name["segment"] -) -_OBJECTTRACKINGANNOTATION.fields_by_name[ - "segment" -].containing_oneof = _OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"] -_OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"].fields.append( - _OBJECTTRACKINGANNOTATION.fields_by_name["track_id"] -) -_OBJECTTRACKINGANNOTATION.fields_by_name[ - "track_id" -].containing_oneof = _OBJECTTRACKINGANNOTATION.oneofs_by_name["track_info"] -_LOGORECOGNITIONANNOTATION.fields_by_name["entity"].message_type = _ENTITY -_LOGORECOGNITIONANNOTATION.fields_by_name["tracks"].message_type = _TRACK -_LOGORECOGNITIONANNOTATION.fields_by_name["segments"].message_type = _VIDEOSEGMENT -_STREAMINGANNOTATEVIDEOREQUEST.fields_by_name[ - "video_config" -].message_type = _STREAMINGVIDEOCONFIG -_STREAMINGANNOTATEVIDEOREQUEST.oneofs_by_name["streaming_request"].fields.append( - _STREAMINGANNOTATEVIDEOREQUEST.fields_by_name["video_config"] -) -_STREAMINGANNOTATEVIDEOREQUEST.fields_by_name[ - "video_config" -].containing_oneof = _STREAMINGANNOTATEVIDEOREQUEST.oneofs_by_name["streaming_request"] -_STREAMINGANNOTATEVIDEOREQUEST.oneofs_by_name["streaming_request"].fields.append( - _STREAMINGANNOTATEVIDEOREQUEST.fields_by_name["input_content"] -) -_STREAMINGANNOTATEVIDEOREQUEST.fields_by_name[ - "input_content" -].containing_oneof = _STREAMINGANNOTATEVIDEOREQUEST.oneofs_by_name["streaming_request"] -_STREAMINGVIDEOCONFIG.fields_by_name[ - "shot_change_detection_config" -].message_type = _STREAMINGSHOTCHANGEDETECTIONCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "label_detection_config" -].message_type = _STREAMINGLABELDETECTIONCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "explicit_content_detection_config" -].message_type = _STREAMINGEXPLICITCONTENTDETECTIONCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "object_tracking_config" -].message_type = _STREAMINGOBJECTTRACKINGCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_action_recognition_config" -].message_type = 
_STREAMINGAUTOMLACTIONRECOGNITIONCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_classification_config" -].message_type = _STREAMINGAUTOMLCLASSIFICATIONCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_object_tracking_config" -].message_type = _STREAMINGAUTOMLOBJECTTRACKINGCONFIG -_STREAMINGVIDEOCONFIG.fields_by_name["feature"].enum_type = _STREAMINGFEATURE -_STREAMINGVIDEOCONFIG.fields_by_name[ - "storage_config" -].message_type = _STREAMINGSTORAGECONFIG -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["shot_change_detection_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "shot_change_detection_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["label_detection_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "label_detection_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["explicit_content_detection_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "explicit_content_detection_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["object_tracking_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "object_tracking_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["automl_action_recognition_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_action_recognition_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] 
-_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["automl_classification_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_classification_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"].fields.append( - _STREAMINGVIDEOCONFIG.fields_by_name["automl_object_tracking_config"] -) -_STREAMINGVIDEOCONFIG.fields_by_name[ - "automl_object_tracking_config" -].containing_oneof = _STREAMINGVIDEOCONFIG.oneofs_by_name["streaming_config"] -_STREAMINGANNOTATEVIDEORESPONSE.fields_by_name[ - "error" -].message_type = google_dot_rpc_dot_status__pb2._STATUS -_STREAMINGANNOTATEVIDEORESPONSE.fields_by_name[ - "annotation_results" -].message_type = _STREAMINGVIDEOANNOTATIONRESULTS -_STREAMINGVIDEOANNOTATIONRESULTS.fields_by_name[ - "shot_annotations" -].message_type = _VIDEOSEGMENT -_STREAMINGVIDEOANNOTATIONRESULTS.fields_by_name[ - "label_annotations" -].message_type = _LABELANNOTATION -_STREAMINGVIDEOANNOTATIONRESULTS.fields_by_name[ - "explicit_annotation" -].message_type = _EXPLICITCONTENTANNOTATION -_STREAMINGVIDEOANNOTATIONRESULTS.fields_by_name[ - "object_annotations" -].message_type = _OBJECTTRACKINGANNOTATION -DESCRIPTOR.message_types_by_name["AnnotateVideoRequest"] = _ANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["VideoContext"] = _VIDEOCONTEXT -DESCRIPTOR.message_types_by_name["LabelDetectionConfig"] = _LABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "ShotChangeDetectionConfig" -] = _SHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["ObjectTrackingConfig"] = _OBJECTTRACKINGCONFIG -DESCRIPTOR.message_types_by_name[ - "ExplicitContentDetectionConfig" -] = _EXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["FaceDetectionConfig"] = _FACEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["PersonDetectionConfig"] = _PERSONDETECTIONCONFIG 
-DESCRIPTOR.message_types_by_name["TextDetectionConfig"] = _TEXTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name["VideoSegment"] = _VIDEOSEGMENT -DESCRIPTOR.message_types_by_name["LabelSegment"] = _LABELSEGMENT -DESCRIPTOR.message_types_by_name["LabelFrame"] = _LABELFRAME -DESCRIPTOR.message_types_by_name["Entity"] = _ENTITY -DESCRIPTOR.message_types_by_name["LabelAnnotation"] = _LABELANNOTATION -DESCRIPTOR.message_types_by_name["ExplicitContentFrame"] = _EXPLICITCONTENTFRAME -DESCRIPTOR.message_types_by_name[ - "ExplicitContentAnnotation" -] = _EXPLICITCONTENTANNOTATION -DESCRIPTOR.message_types_by_name["NormalizedBoundingBox"] = _NORMALIZEDBOUNDINGBOX -DESCRIPTOR.message_types_by_name["TimestampedObject"] = _TIMESTAMPEDOBJECT -DESCRIPTOR.message_types_by_name["Track"] = _TRACK -DESCRIPTOR.message_types_by_name["DetectedAttribute"] = _DETECTEDATTRIBUTE -DESCRIPTOR.message_types_by_name["Celebrity"] = _CELEBRITY -DESCRIPTOR.message_types_by_name["CelebrityTrack"] = _CELEBRITYTRACK -DESCRIPTOR.message_types_by_name[ - "CelebrityRecognitionAnnotation" -] = _CELEBRITYRECOGNITIONANNOTATION -DESCRIPTOR.message_types_by_name["DetectedLandmark"] = _DETECTEDLANDMARK -DESCRIPTOR.message_types_by_name["FaceDetectionAnnotation"] = _FACEDETECTIONANNOTATION -DESCRIPTOR.message_types_by_name[ - "PersonDetectionAnnotation" -] = _PERSONDETECTIONANNOTATION -DESCRIPTOR.message_types_by_name["VideoAnnotationResults"] = _VIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name["AnnotateVideoResponse"] = _ANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name["VideoAnnotationProgress"] = _VIDEOANNOTATIONPROGRESS -DESCRIPTOR.message_types_by_name["AnnotateVideoProgress"] = _ANNOTATEVIDEOPROGRESS -DESCRIPTOR.message_types_by_name[ - "SpeechTranscriptionConfig" -] = _SPEECHTRANSCRIPTIONCONFIG -DESCRIPTOR.message_types_by_name["SpeechContext"] = _SPEECHCONTEXT -DESCRIPTOR.message_types_by_name["SpeechTranscription"] = _SPEECHTRANSCRIPTION -DESCRIPTOR.message_types_by_name[ - 
"SpeechRecognitionAlternative" -] = _SPEECHRECOGNITIONALTERNATIVE -DESCRIPTOR.message_types_by_name["WordInfo"] = _WORDINFO -DESCRIPTOR.message_types_by_name["NormalizedVertex"] = _NORMALIZEDVERTEX -DESCRIPTOR.message_types_by_name["NormalizedBoundingPoly"] = _NORMALIZEDBOUNDINGPOLY -DESCRIPTOR.message_types_by_name["TextSegment"] = _TEXTSEGMENT -DESCRIPTOR.message_types_by_name["TextFrame"] = _TEXTFRAME -DESCRIPTOR.message_types_by_name["TextAnnotation"] = _TEXTANNOTATION -DESCRIPTOR.message_types_by_name["ObjectTrackingFrame"] = _OBJECTTRACKINGFRAME -DESCRIPTOR.message_types_by_name["ObjectTrackingAnnotation"] = _OBJECTTRACKINGANNOTATION -DESCRIPTOR.message_types_by_name[ - "LogoRecognitionAnnotation" -] = _LOGORECOGNITIONANNOTATION -DESCRIPTOR.message_types_by_name[ - "StreamingAnnotateVideoRequest" -] = _STREAMINGANNOTATEVIDEOREQUEST -DESCRIPTOR.message_types_by_name["StreamingVideoConfig"] = _STREAMINGVIDEOCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingAnnotateVideoResponse" -] = _STREAMINGANNOTATEVIDEORESPONSE -DESCRIPTOR.message_types_by_name[ - "StreamingVideoAnnotationResults" -] = _STREAMINGVIDEOANNOTATIONRESULTS -DESCRIPTOR.message_types_by_name[ - "StreamingShotChangeDetectionConfig" -] = _STREAMINGSHOTCHANGEDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingLabelDetectionConfig" -] = _STREAMINGLABELDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingExplicitContentDetectionConfig" -] = _STREAMINGEXPLICITCONTENTDETECTIONCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingObjectTrackingConfig" -] = _STREAMINGOBJECTTRACKINGCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingAutomlActionRecognitionConfig" -] = _STREAMINGAUTOMLACTIONRECOGNITIONCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingAutomlClassificationConfig" -] = _STREAMINGAUTOMLCLASSIFICATIONCONFIG -DESCRIPTOR.message_types_by_name[ - "StreamingAutomlObjectTrackingConfig" -] = _STREAMINGAUTOMLOBJECTTRACKINGCONFIG 
-DESCRIPTOR.message_types_by_name["StreamingStorageConfig"] = _STREAMINGSTORAGECONFIG -DESCRIPTOR.enum_types_by_name["LabelDetectionMode"] = _LABELDETECTIONMODE -DESCRIPTOR.enum_types_by_name["Likelihood"] = _LIKELIHOOD -DESCRIPTOR.enum_types_by_name["StreamingFeature"] = _STREAMINGFEATURE -DESCRIPTOR.enum_types_by_name["Feature"] = _FEATURE -_sym_db.RegisterFileDescriptor(DESCRIPTOR) - -AnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation request. - - - Attributes: - input_uri: - Input video location. Currently, only `Cloud Storage - `__ URIs are supported. - URIs must be specified in the following format: ``gs://bucket- - id/object-id`` (other URI formats return [google.rpc.Code.INVA - LID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more - information, see `Request URIs - `__. - To identify multiple videos, a video URI may include wildcards - in the ``object-id``. Supported wildcards: ’*’ to match 0 or - more characters; ‘?’ to match 1 character. If unset, the input - video should be embedded in the request as ``input_content``. - If set, ``input_content`` must be unset. - input_content: - The video data bytes. If unset, the input video(s) should be - specified via the ``input_uri``. If set, ``input_uri`` must be - unset. - features: - Required. Requested video annotation features. - video_context: - Additional video context and/or feature-specific parameters. - output_uri: - Optional. Location where the output (in JSON format) should be - stored. Currently, only `Cloud Storage - `__ URIs are supported. - These must be specified in the following format: - ``gs://bucket-id/object-id`` (other URI formats return [google - .rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]) - . For more information, see `Request URIs - `__. 
- location_id: - Optional. Cloud region where annotation should take place. - Supported cloud regions are: ``us-east1``, ``us-west1``, - ``europe-west1``, ``asia-east1``. If no region is specified, - the region will be determined based on video file location. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.AnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(AnnotateVideoRequest) - -VideoContext = _reflection.GeneratedProtocolMessageType( - "VideoContext", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOCONTEXT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video context and/or feature-specific parameters. - - - Attributes: - segments: - Video segments to annotate. The segments may overlap and are - not required to be contiguous or span the whole video. If - unspecified, each video is treated as a single segment. - label_detection_config: - Config for LABEL_DETECTION. - shot_change_detection_config: - Config for SHOT_CHANGE_DETECTION. - explicit_content_detection_config: - Config for EXPLICIT_CONTENT_DETECTION. - face_detection_config: - Config for FACE_DETECTION. - speech_transcription_config: - Config for SPEECH_TRANSCRIPTION. - text_detection_config: - Config for TEXT_DETECTION. - person_detection_config: - Config for PERSON_DETECTION. - object_tracking_config: - Config for OBJECT_TRACKING. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.VideoContext) - }, -) -_sym_db.RegisterMessage(VideoContext) - -LabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "LabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _LABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for LABEL_DETECTION. 
- - - Attributes: - label_detection_mode: - What labels should be detected with LABEL_DETECTION, in - addition to video-level labels or segment-level labels. If - unspecified, defaults to ``SHOT_MODE``. - stationary_camera: - Whether the video has been shot from a stationary (i.e., non- - moving) camera. When set to true, might improve detection - accuracy for moving objects. Should be used with - ``SHOT_AND_FRAME_MODE`` enabled. - model: - Model to use for label detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - frame_confidence_threshold: - The confidence threshold we perform filtering on the labels - from frame-level detection. If not set, it is set to 0.4 by - default. The valid range for this threshold is [0.1, 0.9]. Any - value set outside of this range will be clipped. Note: For - best results, follow the default threshold. We will update the - default threshold everytime when we release a new model. - video_confidence_threshold: - The confidence threshold we perform filtering on the labels - from video-level and shot-level detections. If not set, it’s - set to 0.3 by default. The valid range for this threshold is - [0.1, 0.9]. Any value set outside of this range will be - clipped. Note: For best results, follow the default threshold. - We will update the default threshold everytime when we release - a new model. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.LabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(LabelDetectionConfig) - -ShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for SHOT_CHANGE_DETECTION. - - - Attributes: - model: - Model to use for shot change detection. 
Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(ShotChangeDetectionConfig) - -ObjectTrackingConfig = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingConfig", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for OBJECT_TRACKING. - - - Attributes: - model: - Model to use for object tracking. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ObjectTrackingConfig) - }, -) -_sym_db.RegisterMessage(ObjectTrackingConfig) - -ExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "ExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for EXPLICIT_CONTENT_DETECTION. - - - Attributes: - model: - Model to use for explicit content detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(ExplicitContentDetectionConfig) - -FaceDetectionConfig = _reflection.GeneratedProtocolMessageType( - "FaceDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _FACEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for FACE_DETECTION. - - - Attributes: - model: - Model to use for face detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. 
- include_bounding_boxes: - Whether bounding boxes are included in the face annotation - output. - include_attributes: - Whether to enable face attributes detection, such as glasses, - dark_glasses, mouth_open etc. Ignored if - ‘include_bounding_boxes’ is set to false. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.FaceDetectionConfig) - }, -) -_sym_db.RegisterMessage(FaceDetectionConfig) - -PersonDetectionConfig = _reflection.GeneratedProtocolMessageType( - "PersonDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _PERSONDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for PERSON_DETECTION. - - - Attributes: - include_bounding_boxes: - Whether bounding boxes are included in the person detection - annotation output. - include_pose_landmarks: - Whether to enable pose landmarks detection. Ignored if - ‘include_bounding_boxes’ is set to false. - include_attributes: - Whether to enable person attributes detection, such as cloth - color (black, blue, etc), type (coat, dress, etc), pattern - (plain, floral, etc), hair, etc. Ignored if - ‘include_bounding_boxes’ is set to false. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.PersonDetectionConfig) - }, -) -_sym_db.RegisterMessage(PersonDetectionConfig) - -TextDetectionConfig = _reflection.GeneratedProtocolMessageType( - "TextDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _TEXTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for TEXT_DETECTION. - - - Attributes: - language_hints: - Language hint can be specified if the language to be detected - is known a priori. It can increase the accuracy of the - detection. Language hint must be language code in BCP-47 - format. Automatic language detection is performed if no hint - is provided. 
- model: - Model to use for text detection. Supported values: - “builtin/stable” (the default if unset) and “builtin/latest”. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.TextDetectionConfig) - }, -) -_sym_db.RegisterMessage(TextDetectionConfig) - -VideoSegment = _reflection.GeneratedProtocolMessageType( - "VideoSegment", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOSEGMENT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment. - - - Attributes: - start_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the start of the segment (inclusive). - end_time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the end of the segment (inclusive). - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.VideoSegment) - }, -) -_sym_db.RegisterMessage(VideoSegment) - -LabelSegment = _reflection.GeneratedProtocolMessageType( - "LabelSegment", - (_message.Message,), - { - "DESCRIPTOR": _LABELSEGMENT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for label detection. - - - Attributes: - segment: - Video segment where a label was detected. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.LabelSegment) - }, -) -_sym_db.RegisterMessage(LabelSegment) - -LabelFrame = _reflection.GeneratedProtocolMessageType( - "LabelFrame", - (_message.Message,), - { - "DESCRIPTOR": _LABELFRAME, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for label detection. 
- - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - confidence: - Confidence that the label is accurate. Range: [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.LabelFrame) - }, -) -_sym_db.RegisterMessage(LabelFrame) - -Entity = _reflection.GeneratedProtocolMessageType( - "Entity", - (_message.Message,), - { - "DESCRIPTOR": _ENTITY, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Detected entity from video analysis. - - - Attributes: - entity_id: - Opaque entity ID. Some IDs may be available in `Google - Knowledge Graph Search API - `__. - description: - Textual description, e.g., ``Fixed-gear bicycle``. - language_code: - Language code for ``description`` in BCP-47 format. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.Entity) - }, -) -_sym_db.RegisterMessage(Entity) - -LabelAnnotation = _reflection.GeneratedProtocolMessageType( - "LabelAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LABELANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Label annotation. - - - Attributes: - entity: - Detected entity. - category_entities: - Common categories for the detected entity. For example, when - the label is ``Terrier``, the category is likely ``dog``. And - in some cases there might be more than one categories e.g., - ``Terrier`` could also be a ``pet``. - segments: - All video segments where a label was detected. - frames: - All video frames where a label was detected. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.LabelAnnotation) - }, -) -_sym_db.RegisterMessage(LabelAnnotation) - -ExplicitContentFrame = _reflection.GeneratedProtocolMessageType( - "ExplicitContentFrame", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTFRAME, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for explicit content. - - - Attributes: - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this location. - pornography_likelihood: - Likelihood of the pornography content.. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ExplicitContentFrame) - }, -) -_sym_db.RegisterMessage(ExplicitContentFrame) - -ExplicitContentAnnotation = _reflection.GeneratedProtocolMessageType( - "ExplicitContentAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _EXPLICITCONTENTANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Explicit content annotation (based on per-frame visual signals only). - If no explicit content has been detected in a frame, no annotations - are present for that frame. - - - Attributes: - frames: - All video frames where explicit content was detected. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ExplicitContentAnnotation) - }, -) -_sym_db.RegisterMessage(ExplicitContentAnnotation) - -NormalizedBoundingBox = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingBox", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGBOX, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding box. The normalized vertex coordinates are - relative to the original image. Range: [0, 1]. - - - Attributes: - left: - Left X coordinate. 
- top: - Top Y coordinate. - right: - Right X coordinate. - bottom: - Bottom Y coordinate. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.NormalizedBoundingBox) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingBox) - -TimestampedObject = _reflection.GeneratedProtocolMessageType( - "TimestampedObject", - (_message.Message,), - { - "DESCRIPTOR": _TIMESTAMPEDOBJECT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """For tracking related features. An object at time_offset with - attributes, and located with normalized_bounding_box. - - - Attributes: - normalized_bounding_box: - Normalized Bounding box in a frame, where the object is - located. - time_offset: - Time-offset, relative to the beginning of the video, - corresponding to the video frame for this object. - attributes: - Optional. The attributes of the object in the bounding box. - landmarks: - Optional. The detected landmarks. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.TimestampedObject) - }, -) -_sym_db.RegisterMessage(TimestampedObject) - -Track = _reflection.GeneratedProtocolMessageType( - "Track", - (_message.Message,), - { - "DESCRIPTOR": _TRACK, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """A track of an object instance. - - - Attributes: - segment: - Video segment of a track. - timestamped_objects: - The object with timestamp and attributes per frame in the - track. - attributes: - Optional. Attributes in the track level. - confidence: - Optional. The confidence score of the tracked object. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.Track) - }, -) -_sym_db.RegisterMessage(Track) - -DetectedAttribute = _reflection.GeneratedProtocolMessageType( - "DetectedAttribute", - (_message.Message,), - { - "DESCRIPTOR": _DETECTEDATTRIBUTE, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """A generic detected attribute represented by name in string format. - - - Attributes: - name: - The name of the attribute, for example, glasses, dark_glasses, - mouth_open. A full list of supported type names will be - provided in the document. - confidence: - Detected attribute confidence. Range [0, 1]. - value: - Text value of the detection result. For example, the value for - “HairColor” can be “black”, “blonde”, etc. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.DetectedAttribute) - }, -) -_sym_db.RegisterMessage(DetectedAttribute) - -Celebrity = _reflection.GeneratedProtocolMessageType( - "Celebrity", - (_message.Message,), - { - "DESCRIPTOR": _CELEBRITY, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Celebrity definition. - - - Attributes: - name: - The resource name of the celebrity. The format - ``video-intelligence/kg-mid`` indicates a celebrity from the - preloaded gallery. kg-mid is the id in the Google Knowledge - Graph, which is unique for the celebrity. - display_name: - The celebrity name. - description: - Textual description of additional information about the - celebrity, if applicable. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.Celebrity) - }, -) -_sym_db.RegisterMessage(Celebrity) - -CelebrityTrack = _reflection.GeneratedProtocolMessageType( - "CelebrityTrack", - (_message.Message,), - { - "RecognizedCelebrity": _reflection.GeneratedProtocolMessageType( - "RecognizedCelebrity", - (_message.Message,), - { - "DESCRIPTOR": _CELEBRITYTRACK_RECOGNIZEDCELEBRITY, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """The recognized celebrity with confidence score. - - - Attributes: - celebrity: - The recognized celebrity. - confidence: - Recognition confidence. Range [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.CelebrityTrack.RecognizedCelebrity) - }, - ), - "DESCRIPTOR": _CELEBRITYTRACK, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """The annotation result of a celebrity face track. RecognizedCelebrity - field could be empty if the face track does not have any matched - celebrities. - - - Attributes: - celebrities: - Top N match of the celebrities for the face in this track. - face_track: - A track of a person’s face. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.CelebrityTrack) - }, -) -_sym_db.RegisterMessage(CelebrityTrack) -_sym_db.RegisterMessage(CelebrityTrack.RecognizedCelebrity) - -CelebrityRecognitionAnnotation = _reflection.GeneratedProtocolMessageType( - "CelebrityRecognitionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _CELEBRITYRECOGNITIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Celebrity recognition annotation per video. - - - Attributes: - celebrity_tracks: - The tracks detected from the input video, including recognized - celebrities and other detected faces in the video. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.CelebrityRecognitionAnnotation) - }, -) -_sym_db.RegisterMessage(CelebrityRecognitionAnnotation) - -DetectedLandmark = _reflection.GeneratedProtocolMessageType( - "DetectedLandmark", - (_message.Message,), - { - "DESCRIPTOR": _DETECTEDLANDMARK, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """A generic detected landmark represented by name in string format and a - 2D location. - - - Attributes: - name: - The name of this landmark, for example, left_hand, - right_shoulder. - point: - The 2D point of the detected landmark using the normalized - image coordinate system. The normalized coordinates have the - range from 0 to 1. - confidence: - The confidence score of the detected landmark. Range [0, 1]. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.DetectedLandmark) - }, -) -_sym_db.RegisterMessage(DetectedLandmark) - -FaceDetectionAnnotation = _reflection.GeneratedProtocolMessageType( - "FaceDetectionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _FACEDETECTIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Face detection annotation. - - - Attributes: - tracks: - The face tracks with attributes. - thumbnail: - The thumbnail of a person’s face. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.FaceDetectionAnnotation) - }, -) -_sym_db.RegisterMessage(FaceDetectionAnnotation) - -PersonDetectionAnnotation = _reflection.GeneratedProtocolMessageType( - "PersonDetectionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _PERSONDETECTIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Person detection annotation per video. - - - Attributes: - tracks: - The detected tracks of a person. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.PersonDetectionAnnotation) - }, -) -_sym_db.RegisterMessage(PersonDetectionAnnotation) - -VideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation results for a single video. - - - Attributes: - input_uri: - Video file location in `Cloud Storage - `__. - segment: - Video segment on which the annotation is run. - segment_label_annotations: - Topical label annotations on video level or user-specified - segment level. There is exactly one element for each unique - label. - segment_presence_label_annotations: - Presence label annotations on video level or user-specified - segment level. There is exactly one element for each unique - label. Compared to the existing topical - ``segment_label_annotations``, this field presents more fine- - grained, segment-level labels detected in video content and is - made available only when the client sets - ``LabelDetectionConfig.model`` to “builtin/latest” in the - request. - shot_label_annotations: - Topical label annotations on shot level. There is exactly one - element for each unique label. - shot_presence_label_annotations: - Presence label annotations on shot level. There is exactly one - element for each unique label. Compared to the existing - topical ``shot_label_annotations``, this field presents more - fine-grained, shot-level labels detected in video content and - is made available only when the client sets - ``LabelDetectionConfig.model`` to “builtin/latest” in the - request. - frame_label_annotations: - Label annotations on frame level. There is exactly one element - for each unique label. - face_detection_annotations: - Face detection annotations. - shot_annotations: - Shot annotations. 
Each shot is represented as a video segment. - explicit_annotation: - Explicit content annotation. - speech_transcriptions: - Speech transcription. - text_annotations: - OCR text detection and tracking. Annotations for list of - detected text snippets. Each will have list of frame - information associated with it. - object_annotations: - Annotations for list of objects detected and tracked in video. - logo_recognition_annotations: - Annotations for list of logos detected, tracked and recognized - in video. - person_detection_annotations: - Person detection annotations. - celebrity_recognition_annotations: - Celebrity recognition annotations. - error: - If set, indicates an error. Note that for a single - ``AnnotateVideoRequest`` some videos may succeed and some may - fail. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.VideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(VideoAnnotationResults) - -AnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation response. Included in the ``response`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_results: - Annotation results for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.AnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(AnnotateVideoResponse) - -VideoAnnotationProgress = _reflection.GeneratedProtocolMessageType( - "VideoAnnotationProgress", - (_message.Message,), - { - "DESCRIPTOR": _VIDEOANNOTATIONPROGRESS, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation progress for a single video. 
- - - Attributes: - input_uri: - Video file location in `Cloud Storage - `__. - progress_percent: - Approximate percentage processed thus far. Guaranteed to be - 100 when fully processed. - start_time: - Time when the request was received. - update_time: - Time of the most recent update. - feature: - Specifies which feature is being tracked if the request - contains more than one feature. - segment: - Specifies which segment is being tracked if the request - contains more than one segment. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.VideoAnnotationProgress) - }, -) -_sym_db.RegisterMessage(VideoAnnotationProgress) - -AnnotateVideoProgress = _reflection.GeneratedProtocolMessageType( - "AnnotateVideoProgress", - (_message.Message,), - { - "DESCRIPTOR": _ANNOTATEVIDEOPROGRESS, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video annotation progress. Included in the ``metadata`` field of the - ``Operation`` returned by the ``GetOperation`` call of the - ``google::longrunning::Operations`` service. - - - Attributes: - annotation_progress: - Progress metadata for all videos specified in - ``AnnotateVideoRequest``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.AnnotateVideoProgress) - }, -) -_sym_db.RegisterMessage(AnnotateVideoProgress) - -SpeechTranscriptionConfig = _reflection.GeneratedProtocolMessageType( - "SpeechTranscriptionConfig", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for SPEECH_TRANSCRIPTION. - - - Attributes: - language_code: - Required. The language of the supplied audio as a - `BCP-47 `__ - language tag. Example: “en-US”. See `Language Support - `__ for a list - of the currently supported language codes. - max_alternatives: - Optional. 
Maximum number of recognition hypotheses to be - returned. Specifically, the maximum number of - ``SpeechRecognitionAlternative`` messages within each - ``SpeechTranscription``. The server may return fewer than - ``max_alternatives``. Valid values are ``0``-``30``. A value - of ``0`` or ``1`` will return a maximum of one. If omitted, - will return a maximum of one. - filter_profanity: - Optional. If set to ``true``, the server will attempt to - filter out profanities, replacing all but the initial - character in each filtered word with asterisks, e.g. "f***". - If set to ``false`` or omitted, profanities won’t be filtered - out. - speech_contexts: - Optional. A means to provide context to assist the speech - recognition. - enable_automatic_punctuation: - Optional. If ‘true’, adds punctuation to recognition result - hypotheses. This feature is only available in select - languages. Setting this for requests in other languages has no - effect at all. The default ‘false’ value does not add - punctuation to result hypotheses. NOTE: “This is currently - offered as an experimental service, complimentary to all - users. In the future this may be exclusively available as a - premium feature.” - audio_tracks: - Optional. For file formats, such as MXF or MKV, supporting - multiple audio tracks, specify up to two tracks. Default: - track 0. - enable_speaker_diarization: - Optional. If ‘true’, enables speaker detection for each - recognized word in the top alternative of the recognition - result using a speaker_tag provided in the WordInfo. Note: - When this is true, we send all the words from the beginning of - the audio for the top alternative in every consecutive - response. This is done in order to improve our speaker tags as - our models learn to identify the speakers in the conversation - over time. - diarization_speaker_count: - Optional. If set, specifies the estimated number of speakers - in the conversation. If not set, defaults to ‘2’. 
Ignored - unless enable_speaker_diarization is set to true. - enable_word_confidence: - Optional. If ``true``, the top result includes a list of words - and the confidence for those words. If ``false``, no word- - level confidence information is returned. The default is - ``false``. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.SpeechTranscriptionConfig) - }, -) -_sym_db.RegisterMessage(SpeechTranscriptionConfig) - -SpeechContext = _reflection.GeneratedProtocolMessageType( - "SpeechContext", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHCONTEXT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Provides “hints” to the speech recognizer to favor specific words and - phrases in the results. - - - Attributes: - phrases: - Optional. A list of strings containing words and phrases - “hints” so that the speech recognition is more likely to - recognize them. This can be used to improve the accuracy for - specific words and phrases, for example, if specific commands - are typically spoken by the user. This can also be used to add - additional words to the vocabulary of the recognizer. See - `usage limits - `__. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.SpeechContext) - }, -) -_sym_db.RegisterMessage(SpeechContext) - -SpeechTranscription = _reflection.GeneratedProtocolMessageType( - "SpeechTranscription", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHTRANSCRIPTION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """A speech recognition result corresponding to a portion of the audio. - - - Attributes: - alternatives: - May contain one or more recognition hypotheses (up to the - maximum specified in ``max_alternatives``). These alternatives - are ordered in terms of accuracy, with the top (first) - alternative being the most probable, as ranked by the - recognizer. 
- language_code: - Output only. The `BCP-47 `__ language tag of the language - in this result. This language code was detected to have the - most likelihood of being spoken in the audio. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.SpeechTranscription) - }, -) -_sym_db.RegisterMessage(SpeechTranscription) - -SpeechRecognitionAlternative = _reflection.GeneratedProtocolMessageType( - "SpeechRecognitionAlternative", - (_message.Message,), - { - "DESCRIPTOR": _SPEECHRECOGNITIONALTERNATIVE, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Alternative hypotheses (a.k.a. n-best list). - - - Attributes: - transcript: - Transcript text representing the words that the user spoke. - confidence: - Output only. The confidence estimate between 0.0 and 1.0. A - higher number indicates an estimated greater likelihood that - the recognized words are correct. This field is set only for - the top alternative. This field is not guaranteed to be - accurate and users should not rely on it to be always - provided. The default of 0.0 is a sentinel value indicating - ``confidence`` was not set. - words: - Output only. A list of word-specific information for each - recognized word. Note: When ``enable_speaker_diarization`` is - set to true, you will see all the words from the beginning of - the audio. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.SpeechRecognitionAlternative) - }, -) -_sym_db.RegisterMessage(SpeechRecognitionAlternative) - -WordInfo = _reflection.GeneratedProtocolMessageType( - "WordInfo", - (_message.Message,), - { - "DESCRIPTOR": _WORDINFO, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Word-specific information for recognized words. Word information is - only included in the response when certain request parameters are set, - such as ``enable_word_time_offsets``. 
- - - Attributes: - start_time: - Time offset relative to the beginning of the audio, and - corresponding to the start of the spoken word. This field is - only set if ``enable_word_time_offsets=true`` and only in the - top hypothesis. This is an experimental feature and the - accuracy of the time offset can vary. - end_time: - Time offset relative to the beginning of the audio, and - corresponding to the end of the spoken word. This field is - only set if ``enable_word_time_offsets=true`` and only in the - top hypothesis. This is an experimental feature and the - accuracy of the time offset can vary. - word: - The word corresponding to this set of information. - confidence: - Output only. The confidence estimate between 0.0 and 1.0. A - higher number indicates an estimated greater likelihood that - the recognized words are correct. This field is set only for - the top alternative. This field is not guaranteed to be - accurate and users should not rely on it to be always - provided. The default of 0.0 is a sentinel value indicating - ``confidence`` was not set. - speaker_tag: - Output only. A distinct integer value is assigned for every - speaker within the audio. This field specifies which one of - those speakers was detected to have spoken this word. Value - ranges from 1 up to diarization_speaker_count, and is only set - if speaker diarization is enabled. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.WordInfo) - }, -) -_sym_db.RegisterMessage(WordInfo) - -NormalizedVertex = _reflection.GeneratedProtocolMessageType( - "NormalizedVertex", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDVERTEX, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """A vertex represents a 2D point in the image. NOTE: the normalized - vertex coordinates are relative to the original image and range from - 0 to 1. - - - Attributes: - x: - X coordinate. - y: - Y coordinate. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.NormalizedVertex) - }, -) -_sym_db.RegisterMessage(NormalizedVertex) - -NormalizedBoundingPoly = _reflection.GeneratedProtocolMessageType( - "NormalizedBoundingPoly", - (_message.Message,), - { - "DESCRIPTOR": _NORMALIZEDBOUNDINGPOLY, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Normalized bounding polygon for text (that might not be aligned with - axis). Contains a list of the corner points in clockwise order starting - from the top-left corner. For example, for a rectangular bounding box: - When the text is horizontal it might look like: 0----1 \| \| 3----2 When - it’s rotated 180 degrees clockwise around the top-left corner it - becomes: 2----3 \| \| 1----0 and the vertex order will still be (0, 1, 2, - 3). Note that values can be less than 0, or greater than 1 due to - trigonometric calculations for location of the box. - - - Attributes: - vertices: - Normalized vertices of the bounding polygon. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.NormalizedBoundingPoly) - }, -) -_sym_db.RegisterMessage(NormalizedBoundingPoly) - -TextSegment = _reflection.GeneratedProtocolMessageType( - "TextSegment", - (_message.Message,), - { - "DESCRIPTOR": _TEXTSEGMENT, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video segment level annotation results for text detection. - - - Attributes: - segment: - Video segment where a text snippet was detected. - confidence: - Confidence for the track of detected text. It is calculated as - the highest over all frames where OCR detected text appears. - frames: - Information related to the frames where OCR detected text - appears. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.TextSegment) - }, -) -_sym_db.RegisterMessage(TextSegment) - -TextFrame = _reflection.GeneratedProtocolMessageType( - "TextFrame", - (_message.Message,), - { - "DESCRIPTOR": _TEXTFRAME, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotation results for text annotation (OCR). - Contains information regarding timestamp and bounding box locations - for the frames containing detected OCR text snippets. - - - Attributes: - rotated_bounding_box: - Bounding polygon of the detected text for this frame. - time_offset: - Timestamp of this frame. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.TextFrame) - }, -) -_sym_db.RegisterMessage(TextFrame) - -TextAnnotation = _reflection.GeneratedProtocolMessageType( - "TextAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _TEXTANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Annotations related to one detected OCR text snippet. This will - contain the corresponding text, confidence value, and frame level - information for each detection. - - - Attributes: - text: - The detected text. - segments: - All video segments where OCR detected text appears. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.TextAnnotation) - }, -) -_sym_db.RegisterMessage(TextAnnotation) - -ObjectTrackingFrame = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingFrame", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGFRAME, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Video frame level annotations for object detection and tracking. This - field stores per frame location, time offset, and confidence. 
- - - Attributes: - normalized_bounding_box: - The normalized bounding box location of this object track for - the frame. - time_offset: - The timestamp of the frame in microseconds. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ObjectTrackingFrame) - }, -) -_sym_db.RegisterMessage(ObjectTrackingFrame) - -ObjectTrackingAnnotation = _reflection.GeneratedProtocolMessageType( - "ObjectTrackingAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _OBJECTTRACKINGANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Annotations corresponding to one tracked object. - - - Attributes: - track_info: - Different representation of tracking info in non-streaming - batch and streaming modes. - segment: - Non-streaming batch mode ONLY. Each object track corresponds - to one video segment where it appears. - track_id: - Streaming mode ONLY. In streaming mode, we do not know the end - time of a tracked object before it is completed. Hence, there - is no VideoSegment info returned. Instead, we provide a unique - identifiable integer track_id so that the customers can - correlate the results of the ongoing ObjectTrackAnnotation of - the same track_id over time. - entity: - Entity to specify the object category that this track is - labeled as. - confidence: - Object category’s labeling confidence of this track. - frames: - Information corresponding to all frames where this object - track appears. Non-streaming batch mode: it may be one or - multiple ObjectTrackingFrame messages in frames. Streaming - mode: it can only be one ObjectTrackingFrame message in - frames. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.ObjectTrackingAnnotation) - }, -) -_sym_db.RegisterMessage(ObjectTrackingAnnotation) - -LogoRecognitionAnnotation = _reflection.GeneratedProtocolMessageType( - "LogoRecognitionAnnotation", - (_message.Message,), - { - "DESCRIPTOR": _LOGORECOGNITIONANNOTATION, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Annotation corresponding to one detected, tracked and recognized logo - class. - - - Attributes: - entity: - Entity category information to specify the logo class that all - the logo tracks within this LogoRecognitionAnnotation are - recognized as. - tracks: - All logo tracks where the recognized logo appears. Each track - corresponds to one logo instance appearing in consecutive - frames. - segments: - All video segments where the recognized logo appears. There - might be multiple instances of the same logo class appearing - in one VideoSegment. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.LogoRecognitionAnnotation) - }, -) -_sym_db.RegisterMessage(LogoRecognitionAnnotation) - -StreamingAnnotateVideoRequest = _reflection.GeneratedProtocolMessageType( - "StreamingAnnotateVideoRequest", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGANNOTATEVIDEOREQUEST, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """The top-level message sent by the client for the - ``StreamingAnnotateVideo`` method. Multiple - ``StreamingAnnotateVideoRequest`` messages are sent. The first message - must only contain a ``StreamingVideoConfig`` message. All subsequent - messages must only contain ``input_content`` data. - - - Attributes: - streaming_request: - \ *Required* The streaming request, which is either a - streaming config or video content. - video_config: - Provides information to the annotator, specifying how to - process the request. 
The first - ``StreamingAnnotateVideoRequest`` message must only contain a - ``video_config`` message. - input_content: - The video data to be annotated. Chunks of video data are - sequentially sent in ``StreamingAnnotateVideoRequest`` - messages. Except for the initial ``StreamingAnnotateVideoRequest`` - message, which contains only ``video_config``, all subsequent - ``StreamingAnnotateVideoRequest`` messages must only contain the - ``input_content`` field. Note: as with all bytes fields, - protocol buffers use a pure binary representation (not base64). - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoRequest) - }, -) -_sym_db.RegisterMessage(StreamingAnnotateVideoRequest) - -StreamingVideoConfig = _reflection.GeneratedProtocolMessageType( - "StreamingVideoConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGVIDEOCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Provides information to the annotator that specifies how to process - the request. - - - Attributes: - streaming_config: - Config for requested annotation feature. - shot_change_detection_config: - Config for STREAMING_SHOT_CHANGE_DETECTION. - label_detection_config: - Config for STREAMING_LABEL_DETECTION. - explicit_content_detection_config: - Config for STREAMING_EXPLICIT_CONTENT_DETECTION. - object_tracking_config: - Config for STREAMING_OBJECT_TRACKING. - automl_action_recognition_config: - Config for STREAMING_AUTOML_ACTION_RECOGNITION. - automl_classification_config: - Config for STREAMING_AUTOML_CLASSIFICATION. - automl_object_tracking_config: - Config for STREAMING_AUTOML_OBJECT_TRACKING. - feature: - Requested annotation feature. - storage_config: - Streaming storage option. By default: storage is disabled. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingVideoConfig) - }, -) -_sym_db.RegisterMessage(StreamingVideoConfig) - -StreamingAnnotateVideoResponse = _reflection.GeneratedProtocolMessageType( - "StreamingAnnotateVideoResponse", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGANNOTATEVIDEORESPONSE, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """\ ``StreamingAnnotateVideoResponse`` is the only message returned to - the client by ``StreamingAnnotateVideo``. A series of zero or more - ``StreamingAnnotateVideoResponse`` messages are streamed back to the - client. - - - Attributes: - error: - If set, returns a [google.rpc.Status][google.rpc.Status] - message that specifies the error for the operation. - annotation_results: - Streaming annotation results. - annotation_results_uri: - Google Cloud Storage (GCS) URI that stores annotation results - of one streaming session in JSON format. It is the - annotation_result_storage_directory from the request followed - by ‘/cloud_project_number-session_id’. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingAnnotateVideoResponse) - }, -) -_sym_db.RegisterMessage(StreamingAnnotateVideoResponse) - -StreamingVideoAnnotationResults = _reflection.GeneratedProtocolMessageType( - "StreamingVideoAnnotationResults", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGVIDEOANNOTATIONRESULTS, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Streaming annotation results corresponding to a portion of the video - that is currently being processed. - - - Attributes: - shot_annotations: - Shot annotation results. Each shot is represented as a video - segment. - label_annotations: - Label annotation results. - explicit_annotation: - Explicit content annotation results. - object_annotations: - Object tracking results. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingVideoAnnotationResults) - }, -) -_sym_db.RegisterMessage(StreamingVideoAnnotationResults) - -StreamingShotChangeDetectionConfig = _reflection.GeneratedProtocolMessageType( - "StreamingShotChangeDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGSHOTCHANGEDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_SHOT_CHANGE_DETECTION.""", - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingShotChangeDetectionConfig) - }, -) -_sym_db.RegisterMessage(StreamingShotChangeDetectionConfig) - -StreamingLabelDetectionConfig = _reflection.GeneratedProtocolMessageType( - "StreamingLabelDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGLABELDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_LABEL_DETECTION. - - - Attributes: - stationary_camera: - Whether the video has been captured from a stationary - (i.e. non-moving) camera. When set to true, might improve - detection accuracy for moving objects. Default: false. 
- """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingLabelDetectionConfig) - }, -) -_sym_db.RegisterMessage(StreamingLabelDetectionConfig) - -StreamingExplicitContentDetectionConfig = _reflection.GeneratedProtocolMessageType( - "StreamingExplicitContentDetectionConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGEXPLICITCONTENTDETECTIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_EXPLICIT_CONTENT_DETECTION.""", - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingExplicitContentDetectionConfig) - }, -) -_sym_db.RegisterMessage(StreamingExplicitContentDetectionConfig) - -StreamingObjectTrackingConfig = _reflection.GeneratedProtocolMessageType( - "StreamingObjectTrackingConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGOBJECTTRACKINGCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_OBJECT_TRACKING.""", - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingObjectTrackingConfig) - }, -) -_sym_db.RegisterMessage(StreamingObjectTrackingConfig) - -StreamingAutomlActionRecognitionConfig = _reflection.GeneratedProtocolMessageType( - "StreamingAutomlActionRecognitionConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGAUTOMLACTIONRECOGNITIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_AUTOML_ACTION_RECOGNITION. - - - Attributes: - model_name: - Resource name of AutoML model. Format: - ``projects/{project_id}/locations/{location_id}/models/{model_id}`` - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingAutomlActionRecognitionConfig) - }, -) -_sym_db.RegisterMessage(StreamingAutomlActionRecognitionConfig) - -StreamingAutomlClassificationConfig = _reflection.GeneratedProtocolMessageType( - "StreamingAutomlClassificationConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGAUTOMLCLASSIFICATIONCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_AUTOML_CLASSIFICATION. - - - Attributes: - model_name: - Resource name of AutoML model. Format: - ``projects/{project_number}/locations/{location_id}/models/{model_id}`` - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingAutomlClassificationConfig) - }, -) -_sym_db.RegisterMessage(StreamingAutomlClassificationConfig) - -StreamingAutomlObjectTrackingConfig = _reflection.GeneratedProtocolMessageType( - "StreamingAutomlObjectTrackingConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGAUTOMLOBJECTTRACKINGCONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for STREAMING_AUTOML_OBJECT_TRACKING. - - - Attributes: - model_name: - Resource name of AutoML model. Format: - ``projects/{project_id}/locations/{location_id}/models/{model_id}`` - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingAutomlObjectTrackingConfig) - }, -) -_sym_db.RegisterMessage(StreamingAutomlObjectTrackingConfig) - -StreamingStorageConfig = _reflection.GeneratedProtocolMessageType( - "StreamingStorageConfig", - (_message.Message,), - { - "DESCRIPTOR": _STREAMINGSTORAGECONFIG, - "__module__": "google.cloud.videointelligence_v1p3beta1.proto.video_intelligence_pb2", - "__doc__": """Config for streaming storage option. 
- - - Attributes: - enable_storage_annotation_result: - Enable streaming storage. Default: false. - annotation_result_storage_directory: - Cloud Storage URI to store all annotation results for one - client. Client should specify this field as the top-level - storage directory. Annotation results of different sessions - will be put into different sub-directories denoted by - project_name and session_id. All sub-directories will be auto - generated by program and will be made accessible to client in - response proto. URIs must be specified in the following - format: ``gs://bucket-id/object-id`` ``bucket-id`` should be a - valid Cloud Storage bucket created by client and bucket - permission shall also be configured properly. ``object-id`` - can be arbitrary string that make sense to client. Other URI - formats will return error and cause Cloud Storage write - failure. - """, - # @@protoc_insertion_point(class_scope:google.cloud.videointelligence.v1p3beta1.StreamingStorageConfig) - }, -) -_sym_db.RegisterMessage(StreamingStorageConfig) - - -DESCRIPTOR._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["features"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["output_uri"]._options = None -_ANNOTATEVIDEOREQUEST.fields_by_name["location_id"]._options = None -_TIMESTAMPEDOBJECT.fields_by_name["attributes"]._options = None -_TIMESTAMPEDOBJECT.fields_by_name["landmarks"]._options = None -_TRACK.fields_by_name["attributes"]._options = None -_TRACK.fields_by_name["confidence"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["language_code"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["max_alternatives"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["filter_profanity"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["speech_contexts"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name[ - "enable_automatic_punctuation" -]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["audio_tracks"]._options = 
None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["enable_speaker_diarization"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["diarization_speaker_count"]._options = None -_SPEECHTRANSCRIPTIONCONFIG.fields_by_name["enable_word_confidence"]._options = None -_SPEECHCONTEXT.fields_by_name["phrases"]._options = None -_SPEECHTRANSCRIPTION.fields_by_name["language_code"]._options = None -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["confidence"]._options = None -_SPEECHRECOGNITIONALTERNATIVE.fields_by_name["words"]._options = None -_WORDINFO.fields_by_name["confidence"]._options = None -_WORDINFO.fields_by_name["speaker_tag"]._options = None - -_VIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="VideoIntelligenceService", - full_name="google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService", - file=DESCRIPTOR, - index=0, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=12068, - serialized_end=12402, - methods=[ - _descriptor.MethodDescriptor( - name="AnnotateVideo", - full_name="google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService.AnnotateVideo", - index=0, - containing_service=None, - input_type=_ANNOTATEVIDEOREQUEST, - output_type=google_dot_longrunning_dot_operations__pb2._OPERATION, - serialized_options=b'\202\323\344\223\002\037"\032/v1p3beta1/videos:annotate:\001*\332A\022input_uri,features\312A.\n\025AnnotateVideoResponse\022\025AnnotateVideoProgress', - create_key=_descriptor._internal_create_key, - ), - ], -) -_sym_db.RegisterServiceDescriptor(_VIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name["VideoIntelligenceService"] = _VIDEOINTELLIGENCESERVICE - - -_STREAMINGVIDEOINTELLIGENCESERVICE = _descriptor.ServiceDescriptor( - name="StreamingVideoIntelligenceService", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService", - file=DESCRIPTOR, - 
index=1, - serialized_options=b"\312A videointelligence.googleapis.com\322A.https://www.googleapis.com/auth/cloud-platform", - create_key=_descriptor._internal_create_key, - serialized_start=12405, - serialized_end=12706, - methods=[ - _descriptor.MethodDescriptor( - name="StreamingAnnotateVideo", - full_name="google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService.StreamingAnnotateVideo", - index=0, - containing_service=None, - input_type=_STREAMINGANNOTATEVIDEOREQUEST, - output_type=_STREAMINGANNOTATEVIDEORESPONSE, - serialized_options=None, - create_key=_descriptor._internal_create_key, - ), - ], -) -_sym_db.RegisterServiceDescriptor(_STREAMINGVIDEOINTELLIGENCESERVICE) - -DESCRIPTOR.services_by_name[ - "StreamingVideoIntelligenceService" -] = _STREAMINGVIDEOINTELLIGENCESERVICE - -# @@protoc_insertion_point(module_scope) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2_grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2_grpc.py deleted file mode 100644 index 9ba8b9e7fc91..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/proto/video_intelligence_pb2_grpc.py +++ /dev/null @@ -1,173 +0,0 @@ -# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT! -"""Client and server classes corresponding to protobuf-defined services.""" -import grpc - -from google.cloud.videointelligence_v1p3beta1.proto import ( - video_intelligence_pb2 as google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2, -) -from google.longrunning import ( - operations_pb2 as google_dot_longrunning_dot_operations__pb2, -) - - -class VideoIntelligenceServiceStub(object): - """Service that implements the Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. 
- """ - self.AnnotateVideo = channel.unary_unary( - "/google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService/AnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_longrunning_dot_operations__pb2.Operation.FromString, - ) - - -class VideoIntelligenceServiceServicer(object): - """Service that implements the Video Intelligence API. - """ - - def AnnotateVideo(self, request, context): - """Performs asynchronous video annotation. Progress and results can be - retrieved through the `google.longrunning.Operations` interface. - `Operation.metadata` contains `AnnotateVideoProgress` (progress). - `Operation.response` contains `AnnotateVideoResponse` (results). - """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_VideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "AnnotateVideo": grpc.unary_unary_rpc_method_handler( - servicer.AnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.FromString, - response_serializer=google_dot_longrunning_dot_operations__pb2.Operation.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class VideoIntelligenceService(object): - """Service that implements the Video Intelligence API. 
- """ - - @staticmethod - def AnnotateVideo( - request, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.unary_unary( - request, - target, - "/google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService/AnnotateVideo", - google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.AnnotateVideoRequest.SerializeToString, - google_dot_longrunning_dot_operations__pb2.Operation.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) - - -class StreamingVideoIntelligenceServiceStub(object): - """Service that implements streaming Video Intelligence API. - """ - - def __init__(self, channel): - """Constructor. - - Args: - channel: A grpc.Channel. - """ - self.StreamingAnnotateVideo = channel.stream_stream( - "/google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService/StreamingAnnotateVideo", - request_serializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoRequest.SerializeToString, - response_deserializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoResponse.FromString, - ) - - -class StreamingVideoIntelligenceServiceServicer(object): - """Service that implements streaming Video Intelligence API. - """ - - def StreamingAnnotateVideo(self, request_iterator, context): - """Performs video annotation with bidirectional streaming: emitting results - while sending video/audio bytes. - This method is only available via the gRPC API (not REST). 
- """ - context.set_code(grpc.StatusCode.UNIMPLEMENTED) - context.set_details("Method not implemented!") - raise NotImplementedError("Method not implemented!") - - -def add_StreamingVideoIntelligenceServiceServicer_to_server(servicer, server): - rpc_method_handlers = { - "StreamingAnnotateVideo": grpc.stream_stream_rpc_method_handler( - servicer.StreamingAnnotateVideo, - request_deserializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoRequest.FromString, - response_serializer=google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoResponse.SerializeToString, - ), - } - generic_handler = grpc.method_handlers_generic_handler( - "google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService", - rpc_method_handlers, - ) - server.add_generic_rpc_handlers((generic_handler,)) - - -# This class is part of an EXPERIMENTAL API. -class StreamingVideoIntelligenceService(object): - """Service that implements streaming Video Intelligence API. 
- """ - - @staticmethod - def StreamingAnnotateVideo( - request_iterator, - target, - options=(), - channel_credentials=None, - call_credentials=None, - insecure=False, - compression=None, - wait_for_ready=None, - timeout=None, - metadata=None, - ): - return grpc.experimental.stream_stream( - request_iterator, - target, - "/google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService/StreamingAnnotateVideo", - google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoRequest.SerializeToString, - google_dot_cloud_dot_videointelligence__v1p3beta1_dot_proto_dot_video__intelligence__pb2.StreamingAnnotateVideoResponse.FromString, - options, - channel_credentials, - insecure, - call_credentials, - compression, - wait_for_ready, - timeout, - metadata, - ) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/py.typed b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/py.typed new file mode 100644 index 000000000000..e7fb166bf3e5 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/py.typed @@ -0,0 +1,2 @@ +# Marker file for PEP 561. +# The google-cloud-videointelligence package uses inline types. diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/__init__.py new file mode 100644 index 000000000000..42ffdf2bc43d --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/__init__.py @@ -0,0 +1,16 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/__init__.py new file mode 100644 index 000000000000..6cba0cfb2458 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from .client import StreamingVideoIntelligenceServiceClient +from .async_client import StreamingVideoIntelligenceServiceAsyncClient + +__all__ = ( + "StreamingVideoIntelligenceServiceClient", + "StreamingVideoIntelligenceServiceAsyncClient", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/async_client.py new file mode 100644 index 000000000000..f5b04c1965ac --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/async_client.py @@ -0,0 +1,239 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import ( + Dict, + AsyncIterable, + Awaitable, + AsyncIterator, + Sequence, + Tuple, + Type, + Union, +) +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.rpc import status_pb2 as status # type: ignore + +from .transports.base import ( + StreamingVideoIntelligenceServiceTransport, + DEFAULT_CLIENT_INFO, +) +from .transports.grpc_asyncio import ( + StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, +) +from .client import StreamingVideoIntelligenceServiceClient + + +class StreamingVideoIntelligenceServiceAsyncClient: + """Service that implements streaming Video Intelligence API.""" + + _client: StreamingVideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = StreamingVideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = ( + StreamingVideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + ) + + common_billing_account_path = staticmethod( + StreamingVideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + StreamingVideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod( + StreamingVideoIntelligenceServiceClient.common_folder_path + ) + parse_common_folder_path = staticmethod( + StreamingVideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + StreamingVideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + 
StreamingVideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + StreamingVideoIntelligenceServiceClient.common_project_path + ) + parse_common_project_path = staticmethod( + StreamingVideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + StreamingVideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + StreamingVideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = ( + StreamingVideoIntelligenceServiceClient.from_service_account_file + ) + from_service_account_json = from_service_account_file + + @property + def transport(self) -> StreamingVideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + StreamingVideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(StreamingVideoIntelligenceServiceClient).get_transport_class, + type(StreamingVideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[ + str, StreamingVideoIntelligenceServiceTransport + ] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the streaming video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.StreamingVideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. 
It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = StreamingVideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + def streaming_annotate_video( + self, + requests: AsyncIterator[ + video_intelligence.StreamingAnnotateVideoRequest + ] = None, + *, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> Awaitable[AsyncIterable[video_intelligence.StreamingAnnotateVideoResponse]]: + r"""Performs video annotation with bidirectional + streaming: emitting results while sending video/audio + bytes. This method is only available via the gRPC API + (not REST). + + Args: + requests (AsyncIterator[`~.video_intelligence.StreamingAnnotateVideoRequest`]): + The request object AsyncIterator. The top-level message sent by the + client for the `StreamingAnnotateVideo` method. 
Multiple + `StreamingAnnotateVideoRequest` messages are sent. The + first message must only contain a `StreamingVideoConfig` + message. All subsequent messages must only contain + `input_content` data. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + AsyncIterable[~.video_intelligence.StreamingAnnotateVideoResponse]: + ``StreamingAnnotateVideoResponse`` is the only message + returned to the client by ``StreamingAnnotateVideo``. A + series of zero or more + ``StreamingAnnotateVideoResponse`` messages are streamed + back to the client. + + """ + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.streaming_annotate_video, + default_retry=retries.Retry( + initial=0.1, + maximum=60.0, + multiplier=1.3, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=10800.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = rpc(requests, retry=retry, timeout=timeout, metadata=metadata,) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("StreamingVideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/client.py new file mode 100644 index 000000000000..48eb476b0e6c --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/client.py @@ -0,0 +1,397 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import ( + Callable, + Dict, + Optional, + Iterable, + Iterator, + Sequence, + Tuple, + Type, + Union, +) +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.rpc import status_pb2 as status # type: ignore + +from .transports.base import ( + StreamingVideoIntelligenceServiceTransport, + DEFAULT_CLIENT_INFO, +) +from .transports.grpc import StreamingVideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import ( + StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, +) + + +class StreamingVideoIntelligenceServiceClientMeta(type): + """Metaclass for the StreamingVideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[StreamingVideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = StreamingVideoIntelligenceServiceGrpcTransport + _transport_registry[ + "grpc_asyncio" + ] = StreamingVideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[StreamingVideoIntelligenceServiceTransport]: + """Return an appropriate transport class. 
+ + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. + + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class StreamingVideoIntelligenceServiceClient( + metaclass=StreamingVideoIntelligenceServiceClientMeta +): + """Service that implements streaming Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. 
+ + Returns: + {@api.name}: The constructed client. + """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> StreamingVideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + StreamingVideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P<billing_account>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P<folder>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse a organization path into its component segments.""" + m = re.match(r"^organizations/(?P<organization>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + 
+    @staticmethod
+    def parse_common_project_path(path: str) -> Dict[str, str]:
+        """Parse a project path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    @staticmethod
+    def common_location_path(project: str, location: str,) -> str:
+        """Return a fully-qualified location string."""
+        return "projects/{project}/locations/{location}".format(
+            project=project, location=location,
+        )
+
+    @staticmethod
+    def parse_common_location_path(path: str) -> Dict[str, str]:
+        """Parse a location path into its component segments."""
+        m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path)
+        return m.groupdict() if m else {}
+
+    def __init__(
+        self,
+        *,
+        credentials: Optional[credentials.Credentials] = None,
+        transport: Union[str, StreamingVideoIntelligenceServiceTransport, None] = None,
+        client_options: Optional[client_options_lib.ClientOptions] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+    ) -> None:
+        """Instantiate the streaming video intelligence service client.
+
+        Args:
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            transport (Union[str, ~.StreamingVideoIntelligenceServiceTransport]): The
+                transport to use. If set to None, a transport is chosen
+                automatically.
+            client_options (client_options_lib.ClientOptions): Custom options for the
+                client. It won't take effect if a ``transport`` instance is provided.
+                (1) The ``api_endpoint`` property can be used to override the
+                default endpoint provided by the client.
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, StreamingVideoIntelligenceServiceTransport): + # transport is a StreamingVideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def streaming_annotate_video( + self, + requests: Iterator[video_intelligence.StreamingAnnotateVideoRequest] = None, + *, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> Iterable[video_intelligence.StreamingAnnotateVideoResponse]: + r"""Performs video annotation with bidirectional + streaming: emitting results while sending video/audio + bytes. 
This method is only available via the gRPC API + (not REST). + + Args: + requests (Iterator[`~.video_intelligence.StreamingAnnotateVideoRequest`]): + The request object iterator. The top-level message sent by the + client for the `StreamingAnnotateVideo` method. Multiple + `StreamingAnnotateVideoRequest` messages are sent. The + first message must only contain a `StreamingVideoConfig` + message. All subsequent messages must only contain + `input_content` data. + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + Iterable[~.video_intelligence.StreamingAnnotateVideoResponse]: + ``StreamingAnnotateVideoResponse`` is the only message + returned to the client by ``StreamingAnnotateVideo``. A + series of zero or more + ``StreamingAnnotateVideoResponse`` messages are streamed + back to the client. + + """ + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.streaming_annotate_video] + + # Send the request. + response = rpc(requests, retry=retry, timeout=timeout, metadata=metadata,) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("StreamingVideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..e65094b04ad7 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/__init__.py @@ -0,0 +1,40 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import StreamingVideoIntelligenceServiceTransport +from .grpc import StreamingVideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import StreamingVideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[StreamingVideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = StreamingVideoIntelligenceServiceGrpcTransport +_transport_registry[ + "grpc_asyncio" +] = StreamingVideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "StreamingVideoIntelligenceServiceTransport", + "StreamingVideoIntelligenceServiceGrpcTransport", + "StreamingVideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..62df02325a9b --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/base.py @@ -0,0 +1,137 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+#
+
+import abc
+import typing
+import pkg_resources
+
+from google import auth  # type: ignore
+from google.api_core import exceptions  # type: ignore
+from google.api_core import gapic_v1  # type: ignore
+from google.api_core import retry as retries  # type: ignore
+from google.auth import credentials  # type: ignore
+
+from google.cloud.videointelligence_v1p3beta1.types import video_intelligence
+
+
+try:
+    DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo(
+        gapic_version=pkg_resources.get_distribution(
+            "google-cloud-videointelligence",
+        ).version,
+    )
+except pkg_resources.DistributionNotFound:
+    DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo()
+
+
+class StreamingVideoIntelligenceServiceTransport(abc.ABC):
+    """Abstract transport class for StreamingVideoIntelligenceService."""
+
+    AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",)
+
+    def __init__(
+        self,
+        *,
+        host: str = "videointelligence.googleapis.com",
+        credentials: credentials.Credentials = None,
+        credentials_file: typing.Optional[str] = None,
+        scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES,
+        quota_project_id: typing.Optional[str] = None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+        **kwargs,
+    ) -> None:
+        """Instantiate the transport.
+
+        Args:
+            host (Optional[str]): The hostname to connect to.
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is mutually exclusive with credentials.
+            scopes (Optional[Sequence[str]]): A list of scopes.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+ client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.streaming_annotate_video: gapic_v1.method.wrap_method( + self.streaming_annotate_video, + default_retry=retries.Retry( + initial=0.1, + maximum=60.0, + multiplier=1.3, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=10800.0, + client_info=client_info, + ), + } + + @property + def streaming_annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.StreamingAnnotateVideoRequest], + typing.Union[ + video_intelligence.StreamingAnnotateVideoResponse, + typing.Awaitable[video_intelligence.StreamingAnnotateVideoResponse], + ], + ]: + raise NotImplementedError() + + +__all__ = ("StreamingVideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..707321758e57 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc.py @@ -0,0 +1,267 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence + +from .base import StreamingVideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class StreamingVideoIntelligenceServiceGrpcTransport( + StreamingVideoIntelligenceServiceTransport +): + """gRPC backend transport for StreamingVideoIntelligenceService. + + Service that implements streaming Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. 
+                This argument is ignored if ``channel`` is provided.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is ignored if ``channel`` is provided.
+            scopes (Optional[Sequence[str]]): A list of scopes. This argument is
+                ignored if ``channel`` is provided.
+            channel (Optional[grpc.Channel]): A ``Channel`` instance through
+                which to make calls.
+            api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.
+                If provided, it overrides the ``host`` argument and tries to create
+                a mutual TLS channel with client SSL credentials from
+                ``client_cert_source`` or application default SSL credentials.
+            client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):
+                Deprecated. A callback to provide client SSL certificate bytes and
+                private key bytes, both in PEM format. It is ignored if
+                ``api_mtls_endpoint`` is None.
+            ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials
+                for grpc channel. It is ignored if ``channel`` is provided.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            client_info (google.api_core.gapic_v1.client_info.ClientInfo):
+                The client info used to send a user-agent string along with
+                API requests. If ``None``, then default info will be used.
+                Generally, you only need to set this if you're developing
+                your own client library.
+
+        Raises:
+            google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport
+                creation failed for any reason.
+            google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials``
+                and ``credentials_file`` are passed.
+        """
+        self._ssl_channel_credentials = ssl_channel_credentials
+
+        if channel:
+            # Sanity check: Ensure that channel and credentials are not both
+            # provided.
+            credentials = False
+
+            # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+        super().__init__(
+            host=host,
+            credentials=credentials,
+            credentials_file=credentials_file,
+            scopes=scopes or self.AUTH_SCOPES,
+            quota_project_id=quota_project_id,
+            client_info=client_info,
+        )
+
+    @classmethod
+    def create_channel(
+        cls,
+        host: str = "videointelligence.googleapis.com",
+        credentials: credentials.Credentials = None,
+        credentials_file: str = None,
+        scopes: Optional[Sequence[str]] = None,
+        quota_project_id: Optional[str] = None,
+        **kwargs,
+    ) -> grpc.Channel:
+        """Create and return a gRPC channel object.
+        Args:
+            address (Optional[str]): The host for the channel to use.
+            credentials (Optional[~.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify this application to the service. If
+                none are specified, the client will attempt to ascertain
+                the credentials from the environment.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is mutually exclusive with credentials.
+            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+                service. These are only used when credentials are not specified and
+                are passed to :func:`google.auth.default`.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            kwargs (Optional[dict]): Keyword arguments, which are passed to the
+                channel creation.
+        Returns:
+            grpc.Channel: A gRPC channel object.
+
+        Raises:
+            google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials``
+                and ``credentials_file`` are passed.
+        """
+        scopes = scopes or cls.AUTH_SCOPES
+        return grpc_helpers.create_channel(
+            host,
+            credentials=credentials,
+            credentials_file=credentials_file,
+            scopes=scopes,
+            quota_project_id=quota_project_id,
+            **kwargs,
+        )
+
+    @property
+    def grpc_channel(self) -> grpc.Channel:
+        """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def streaming_annotate_video( + self, + ) -> Callable[ + [video_intelligence.StreamingAnnotateVideoRequest], + video_intelligence.StreamingAnnotateVideoResponse, + ]: + r"""Return a callable for the streaming annotate video method over gRPC. + + Performs video annotation with bidirectional + streaming: emitting results while sending video/audio + bytes. This method is only available via the gRPC API + (not REST). + + Returns: + Callable[[~.StreamingAnnotateVideoRequest], + ~.StreamingAnnotateVideoResponse]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "streaming_annotate_video" not in self._stubs: + self._stubs["streaming_annotate_video"] = self.grpc_channel.stream_stream( + "/google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService/StreamingAnnotateVideo", + request_serializer=video_intelligence.StreamingAnnotateVideoRequest.serialize, + response_deserializer=video_intelligence.StreamingAnnotateVideoResponse.deserialize, + ) + return self._stubs["streaming_annotate_video"] + + +__all__ = ("StreamingVideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..f692d3101618 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/streaming_video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,271 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed 
under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence + +from .base import StreamingVideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import StreamingVideoIntelligenceServiceGrpcTransport + + +class StreamingVideoIntelligenceServiceGrpcAsyncIOTransport( + StreamingVideoIntelligenceServiceTransport +): + """gRPC AsyncIO backend transport for StreamingVideoIntelligenceService. + + Service that implements streaming Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. 
+    """
+
+    _grpc_channel: aio.Channel
+    _stubs: Dict[str, Callable] = {}
+
+    @classmethod
+    def create_channel(
+        cls,
+        host: str = "videointelligence.googleapis.com",
+        credentials: credentials.Credentials = None,
+        credentials_file: Optional[str] = None,
+        scopes: Optional[Sequence[str]] = None,
+        quota_project_id: Optional[str] = None,
+        **kwargs,
+    ) -> aio.Channel:
+        """Create and return a gRPC AsyncIO channel object.
+        Args:
+            address (Optional[str]): The host for the channel to use.
+            credentials (Optional[~.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify this application to the service. If
+                none are specified, the client will attempt to ascertain
+                the credentials from the environment.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is mutually exclusive with credentials.
+            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+                service. These are only used when credentials are not specified and
+                are passed to :func:`google.auth.default`.
+            quota_project_id (Optional[str]): An optional project to use for billing
+                and quota.
+            kwargs (Optional[dict]): Keyword arguments, which are passed to the
+                channel creation.
+        Returns:
+            aio.Channel: A gRPC AsyncIO channel object.
+        """
+        scopes = scopes or cls.AUTH_SCOPES
+        return grpc_helpers_async.create_channel(
+            host,
+            credentials=credentials,
+            credentials_file=credentials_file,
+            scopes=scopes,
+            quota_project_id=quota_project_id,
+            **kwargs,
+        )
+
+    def __init__(
+        self,
+        *,
+        host: str = "videointelligence.googleapis.com",
+        credentials: credentials.Credentials = None,
+        credentials_file: Optional[str] = None,
+        scopes: Optional[Sequence[str]] = None,
+        channel: aio.Channel = None,
+        api_mtls_endpoint: str = None,
+        client_cert_source: Callable[[], Tuple[bytes, bytes]] = None,
+        ssl_channel_credentials: grpc.ChannelCredentials = None,
+        quota_project_id=None,
+        client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
+    ) -> None:
+        """Instantiate the transport.
+
+        Args:
+            host (Optional[str]): The hostname to connect to.
+            credentials (Optional[google.auth.credentials.Credentials]): The
+                authorization credentials to attach to requests. These
+                credentials identify the application to the service; if none
+                are specified, the client will attempt to ascertain the
+                credentials from the environment.
+                This argument is ignored if ``channel`` is provided.
+            credentials_file (Optional[str]): A file with credentials that can
+                be loaded with :func:`google.auth.load_credentials_from_file`.
+                This argument is ignored if ``channel`` is provided.
+            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
+                service. These are only used when credentials are not specified and
+                are passed to :func:`google.auth.default`.
+            channel (Optional[aio.Channel]): A ``Channel`` instance through
+                which to make calls.
+            api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.
+                If provided, it overrides the ``host`` argument and tries to create
+                a mutual TLS channel with client SSL credentials from
+                ``client_cert_source`` or application default SSL credentials.
+            client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):
+                Deprecated.
A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. 
+ self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. + + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def streaming_annotate_video( + self, + ) -> Callable[ + [video_intelligence.StreamingAnnotateVideoRequest], + Awaitable[video_intelligence.StreamingAnnotateVideoResponse], + ]: + r"""Return a callable for the streaming annotate video method over gRPC. + + Performs video annotation with bidirectional + streaming: emitting results while sending video/audio + bytes. This method is only available via the gRPC API + (not REST). + + Returns: + Callable[[~.StreamingAnnotateVideoRequest], + Awaitable[~.StreamingAnnotateVideoResponse]]: + A function that, when called, will call the underlying RPC + on the server. 
+ """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. + if "streaming_annotate_video" not in self._stubs: + self._stubs["streaming_annotate_video"] = self.grpc_channel.stream_stream( + "/google.cloud.videointelligence.v1p3beta1.StreamingVideoIntelligenceService/StreamingAnnotateVideo", + request_serializer=video_intelligence.StreamingAnnotateVideoRequest.serialize, + response_deserializer=video_intelligence.StreamingAnnotateVideoResponse.deserialize, + ) + return self._stubs["streaming_annotate_video"] + + +__all__ = ("StreamingVideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/__init__.py new file mode 100644 index 000000000000..559320286ca2 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/__init__.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from .client import VideoIntelligenceServiceClient +from .async_client import VideoIntelligenceServiceAsyncClient + +__all__ = ( + "VideoIntelligenceServiceClient", + "VideoIntelligenceServiceAsyncClient", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/async_client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/async_client.py new file mode 100644 index 000000000000..43004c36dee1 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/async_client.py @@ -0,0 +1,270 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +from collections import OrderedDict +import functools +import re +from typing import Dict, Sequence, Tuple, Type, Union +import pkg_resources + +import google.api_core.client_options as ClientOptions # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport +from .client import VideoIntelligenceServiceClient + + +class VideoIntelligenceServiceAsyncClient: + """Service that implements the Video Intelligence API.""" + + _client: VideoIntelligenceServiceClient + + DEFAULT_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_ENDPOINT + DEFAULT_MTLS_ENDPOINT = VideoIntelligenceServiceClient.DEFAULT_MTLS_ENDPOINT + + common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.common_billing_account_path + ) + parse_common_billing_account_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_billing_account_path + ) + + common_folder_path = staticmethod(VideoIntelligenceServiceClient.common_folder_path) + parse_common_folder_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_folder_path + ) + + common_organization_path = staticmethod( + VideoIntelligenceServiceClient.common_organization_path + ) + parse_common_organization_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_organization_path + ) + + common_project_path = staticmethod( + VideoIntelligenceServiceClient.common_project_path + ) + parse_common_project_path = 
staticmethod( + VideoIntelligenceServiceClient.parse_common_project_path + ) + + common_location_path = staticmethod( + VideoIntelligenceServiceClient.common_location_path + ) + parse_common_location_path = staticmethod( + VideoIntelligenceServiceClient.parse_common_location_path + ) + + from_service_account_file = VideoIntelligenceServiceClient.from_service_account_file + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._client.transport + + get_transport_class = functools.partial( + type(VideoIntelligenceServiceClient).get_transport_class, + type(VideoIntelligenceServiceClient), + ) + + def __init__( + self, + *, + credentials: credentials.Credentials = None, + transport: Union[str, VideoIntelligenceServiceTransport] = "grpc_asyncio", + client_options: ClientOptions = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (ClientOptions): Custom options for the client. It + won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. 
GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value). However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + + Raises: + google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport + creation failed for any reason. + """ + + self._client = VideoIntelligenceServiceClient( + credentials=credentials, + transport=transport, + client_options=client_options, + client_info=client_info, + ) + + async def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation_async.AsyncOperation: + r"""Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. 
URIs must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + must be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation_async.AsyncOperation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." 
+ ) + + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = gapic_v1.method_async.wrap_method( + self._client._transport.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=DEFAULT_CLIENT_INFO, + ) + + # Send the request. + response = await rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation_async.from_gapic( + response, + self._client._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceAsyncClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/client.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/client.py new file mode 100644 index 000000000000..526c99d7b713 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/client.py @@ -0,0 +1,438 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
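The `annotate_video` method above enforces that callers pass either a full request object or the flattened `input_uri`/`features` fields, never both, before copying the flattened values onto the request. A standalone sketch of that guard, with `AnnotateRequest` as a hypothetical stand-in for `video_intelligence.AnnotateVideoRequest`:

```python
class AnnotateRequest:
    """Hypothetical stand-in for video_intelligence.AnnotateVideoRequest."""

    def __init__(self, input_uri=None, features=None):
        self.input_uri = input_uri
        self.features = list(features or [])


def build_request(request=None, *, input_uri=None, features=None):
    # Sanity check: a request object and flattened fields are mutually exclusive.
    if request is not None and any([input_uri, features]):
        raise ValueError(
            "If the `request` argument is set, then none of "
            "the individual field arguments should be set."
        )
    if not isinstance(request, AnnotateRequest):
        request = AnnotateRequest()
    # Apply any flattened keyword arguments to the request fields.
    if input_uri is not None:
        request.input_uri = input_uri
    if features:
        request.features.extend(features)
    return request


req = build_request(
    input_uri="gs://bucket/video.mp4", features=["LABEL_DETECTION"]
)
```

Passing both a request and a flattened field raises `ValueError`, mirroring the generated client's behavior.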
+# + +from collections import OrderedDict +from distutils import util +import os +import re +from typing import Callable, Dict, Optional, Sequence, Tuple, Type, Union +import pkg_resources + +from google.api_core import client_options as client_options_lib # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport import mtls # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore +from google.auth.exceptions import MutualTLSChannelError # type: ignore +from google.oauth2 import service_account # type: ignore + +from google.api_core import operation # type: ignore +from google.api_core import operation_async # type: ignore +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence + +from .transports.base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .transports.grpc import VideoIntelligenceServiceGrpcTransport +from .transports.grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +class VideoIntelligenceServiceClientMeta(type): + """Metaclass for the VideoIntelligenceService client. + + This provides class-level methods for building and retrieving + support objects (e.g. transport) without polluting the client instance + objects. + """ + + _transport_registry = ( + OrderedDict() + ) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] + _transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport + _transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + def get_transport_class( + cls, label: str = None, + ) -> Type[VideoIntelligenceServiceTransport]: + """Return an appropriate transport class. + + Args: + label: The name of the desired transport. If none is + provided, then the first transport in the registry is used. 
+ + Returns: + The transport class to use. + """ + # If a specific transport is requested, return that one. + if label: + return cls._transport_registry[label] + + # No transport is requested; return the default (that is, the first one + # in the dictionary). + return next(iter(cls._transport_registry.values())) + + +class VideoIntelligenceServiceClient(metaclass=VideoIntelligenceServiceClientMeta): + """Service that implements the Video Intelligence API.""" + + @staticmethod + def _get_default_mtls_endpoint(api_endpoint): + """Convert api endpoint to mTLS endpoint. + Convert "*.sandbox.googleapis.com" and "*.googleapis.com" to + "*.mtls.sandbox.googleapis.com" and "*.mtls.googleapis.com" respectively. + Args: + api_endpoint (Optional[str]): the api endpoint to convert. + Returns: + str: converted mTLS api endpoint. + """ + if not api_endpoint: + return api_endpoint + + mtls_endpoint_re = re.compile( + r"(?P<name>[^.]+)(?P<mtls>\.mtls)?(?P<sandbox>\.sandbox)?(?P<googledomain>\.googleapis\.com)?" + ) + + m = mtls_endpoint_re.match(api_endpoint) + name, mtls, sandbox, googledomain = m.groups() + if mtls or not googledomain: + return api_endpoint + + if sandbox: + return api_endpoint.replace( + "sandbox.googleapis.com", "mtls.sandbox.googleapis.com" + ) + + return api_endpoint.replace(".googleapis.com", ".mtls.googleapis.com") + + DEFAULT_ENDPOINT = "videointelligence.googleapis.com" + DEFAULT_MTLS_ENDPOINT = _get_default_mtls_endpoint.__func__( # type: ignore + DEFAULT_ENDPOINT + ) + + @classmethod + def from_service_account_file(cls, filename: str, *args, **kwargs): + """Creates an instance of this client using the provided credentials + file. + + Args: + filename (str): The path to the service account private key json + file. + args: Additional arguments to pass to the constructor. + kwargs: Additional arguments to pass to the constructor. + + Returns: + VideoIntelligenceServiceClient: The constructed client.
+ """ + credentials = service_account.Credentials.from_service_account_file(filename) + kwargs["credentials"] = credentials + return cls(*args, **kwargs) + + from_service_account_json = from_service_account_file + + @property + def transport(self) -> VideoIntelligenceServiceTransport: + """Return the transport used by the client instance. + + Returns: + VideoIntelligenceServiceTransport: The transport used by the client instance. + """ + return self._transport + + @staticmethod + def common_billing_account_path(billing_account: str,) -> str: + """Return a fully-qualified billing_account string.""" + return "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + + @staticmethod + def parse_common_billing_account_path(path: str) -> Dict[str, str]: + """Parse a billing_account path into its component segments.""" + m = re.match(r"^billingAccounts/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_folder_path(folder: str,) -> str: + """Return a fully-qualified folder string.""" + return "folders/{folder}".format(folder=folder,) + + @staticmethod + def parse_common_folder_path(path: str) -> Dict[str, str]: + """Parse a folder path into its component segments.""" + m = re.match(r"^folders/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_organization_path(organization: str,) -> str: + """Return a fully-qualified organization string.""" + return "organizations/{organization}".format(organization=organization,) + + @staticmethod + def parse_common_organization_path(path: str) -> Dict[str, str]: + """Parse a organization path into its component segments.""" + m = re.match(r"^organizations/(?P.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_project_path(project: str,) -> str: + """Return a fully-qualified project string.""" + return "projects/{project}".format(project=project,) + + @staticmethod + def parse_common_project_path(path: str) -> 
Dict[str, str]: + """Parse a project path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)$", path) + return m.groupdict() if m else {} + + @staticmethod + def common_location_path(project: str, location: str,) -> str: + """Return a fully-qualified location string.""" + return "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + + @staticmethod + def parse_common_location_path(path: str) -> Dict[str, str]: + """Parse a location path into its component segments.""" + m = re.match(r"^projects/(?P<project>.+?)/locations/(?P<location>.+?)$", path) + return m.groupdict() if m else {} + + def __init__( + self, + *, + credentials: Optional[credentials.Credentials] = None, + transport: Union[str, VideoIntelligenceServiceTransport, None] = None, + client_options: Optional[client_options_lib.ClientOptions] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the video intelligence service client. + + Args: + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + transport (Union[str, ~.VideoIntelligenceServiceTransport]): The + transport to use. If set to None, a transport is chosen + automatically. + client_options (client_options_lib.ClientOptions): Custom options for the + client. It won't take effect if a ``transport`` instance is provided. + (1) The ``api_endpoint`` property can be used to override the + default endpoint provided by the client. GOOGLE_API_USE_MTLS_ENDPOINT + environment variable can also be used to override the endpoint: + "always" (always use the default mTLS endpoint), "never" (always + use the default regular endpoint) and "auto" (auto switch to the + default mTLS endpoint if client certificate is present, this is + the default value).
However, the ``api_endpoint`` property takes + precedence if provided. + (2) If GOOGLE_API_USE_CLIENT_CERTIFICATE environment variable + is "true", then the ``client_cert_source`` property can be used + to provide client certificate for mutual TLS transport. If + not provided, the default SSL client certificate will be used if + present. If GOOGLE_API_USE_CLIENT_CERTIFICATE is "false" or not + set, no client certificate will be used. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + """ + if isinstance(client_options, dict): + client_options = client_options_lib.from_dict(client_options) + if client_options is None: + client_options = client_options_lib.ClientOptions() + + # Create SSL credentials for mutual TLS if needed. + use_client_cert = bool( + util.strtobool(os.getenv("GOOGLE_API_USE_CLIENT_CERTIFICATE", "false")) + ) + + ssl_credentials = None + is_mtls = False + if use_client_cert: + if client_options.client_cert_source: + import grpc # type: ignore + + cert, key = client_options.client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + is_mtls = True + else: + creds = SslCredentials() + is_mtls = creds.is_mtls + ssl_credentials = creds.ssl_credentials if is_mtls else None + + # Figure out which api endpoint to use. 
+ if client_options.api_endpoint is not None: + api_endpoint = client_options.api_endpoint + else: + use_mtls_env = os.getenv("GOOGLE_API_USE_MTLS_ENDPOINT", "auto") + if use_mtls_env == "never": + api_endpoint = self.DEFAULT_ENDPOINT + elif use_mtls_env == "always": + api_endpoint = self.DEFAULT_MTLS_ENDPOINT + elif use_mtls_env == "auto": + api_endpoint = ( + self.DEFAULT_MTLS_ENDPOINT if is_mtls else self.DEFAULT_ENDPOINT + ) + else: + raise MutualTLSChannelError( + "Unsupported GOOGLE_API_USE_MTLS_ENDPOINT value. Accepted values: never, auto, always" + ) + + # Save or instantiate the transport. + # Ordinarily, we provide the transport, but allowing a custom transport + # instance provides an extensibility point for unusual situations. + if isinstance(transport, VideoIntelligenceServiceTransport): + # transport is a VideoIntelligenceServiceTransport instance. + if credentials or client_options.credentials_file: + raise ValueError( + "When providing a transport instance, " + "provide its credentials directly." + ) + if client_options.scopes: + raise ValueError( + "When providing a transport instance, " + "provide its scopes directly." + ) + self._transport = transport + else: + Transport = type(self).get_transport_class(transport) + self._transport = Transport( + credentials=credentials, + credentials_file=client_options.credentials_file, + host=api_endpoint, + scopes=client_options.scopes, + ssl_channel_credentials=ssl_credentials, + quota_project_id=client_options.quota_project_id, + client_info=client_info, + ) + + def annotate_video( + self, + request: video_intelligence.AnnotateVideoRequest = None, + *, + input_uri: str = None, + features: Sequence[video_intelligence.Feature] = None, + retry: retries.Retry = gapic_v1.method.DEFAULT, + timeout: float = None, + metadata: Sequence[Tuple[str, str]] = (), + ) -> operation.Operation: + r"""Performs asynchronous video annotation. 
Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Args: + request (:class:`~.video_intelligence.AnnotateVideoRequest`): + The request object. Video annotation request. + input_uri (:class:`str`): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. URIs must be specified in the following + format: ``gs://bucket-id/object-id`` (other URI formats + return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' + to match 0 or more characters; '?' to match 1 character. + If unset, the input video should be embedded in the + request as ``input_content``. If set, ``input_content`` + must be unset. + This corresponds to the ``input_uri`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + features (:class:`Sequence[~.video_intelligence.Feature]`): + Required. Requested video annotation + features. + This corresponds to the ``features`` field + on the ``request`` instance; if ``request`` is provided, this + should not be set. + + retry (google.api_core.retry.Retry): Designation of what errors, if any, + should be retried. + timeout (float): The timeout for this request. + metadata (Sequence[Tuple[str, str]]): Strings which should be + sent along with the request as metadata. + + Returns: + ~.operation.Operation: + An object representing a long-running operation. + + The result type for the operation will be + :class:``~.video_intelligence.AnnotateVideoResponse``: + Video annotation response. 
Included in the ``response`` + field of the ``Operation`` returned by the + ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + """ + # Create or coerce a protobuf request object. + # Sanity check: If we got a request object, we should *not* have + # gotten any keyword arguments that map to the request. + has_flattened_params = any([input_uri, features]) + if request is not None and has_flattened_params: + raise ValueError( + "If the `request` argument is set, then none of " + "the individual field arguments should be set." + ) + + # Minor optimization to avoid making a copy if the user passes + # in a video_intelligence.AnnotateVideoRequest. + # There's no risk of modifying the input as we've already verified + # there are no flattened fields. + if not isinstance(request, video_intelligence.AnnotateVideoRequest): + request = video_intelligence.AnnotateVideoRequest(request) + + # If we have keyword arguments corresponding to fields on the + # request, apply these. + + if input_uri is not None: + request.input_uri = input_uri + + if features: + request.features.extend(features) + + # Wrap the RPC method; this adds retry and timeout information, + # and friendly error handling. + rpc = self._transport._wrapped_methods[self._transport.annotate_video] + + # Send the request. + response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,) + + # Wrap the response in an operation future. + response = operation.from_gapic( + response, + self._transport.operations_client, + video_intelligence.AnnotateVideoResponse, + metadata_type=video_intelligence.AnnotateVideoProgress, + ) + + # Done; return the response. 
+ return response + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +__all__ = ("VideoIntelligenceServiceClient",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/__init__.py new file mode 100644 index 000000000000..08de302230ba --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/__init__.py @@ -0,0 +1,38 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +from collections import OrderedDict +from typing import Dict, Type + +from .base import VideoIntelligenceServiceTransport +from .grpc import VideoIntelligenceServiceGrpcTransport +from .grpc_asyncio import VideoIntelligenceServiceGrpcAsyncIOTransport + + +# Compile a registry of transports. 
+_transport_registry = ( + OrderedDict() +) # type: Dict[str, Type[VideoIntelligenceServiceTransport]] +_transport_registry["grpc"] = VideoIntelligenceServiceGrpcTransport +_transport_registry["grpc_asyncio"] = VideoIntelligenceServiceGrpcAsyncIOTransport + + +__all__ = ( + "VideoIntelligenceServiceTransport", + "VideoIntelligenceServiceGrpcTransport", + "VideoIntelligenceServiceGrpcAsyncIOTransport", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/base.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/base.py new file mode 100644 index 000000000000..d2368ab14f0e --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/base.py @@ -0,0 +1,141 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import abc +import typing +import pkg_resources + +from google import auth # type: ignore +from google.api_core import exceptions # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google.api_core import retry as retries # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.auth import credentials # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + + +try: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo( + gapic_version=pkg_resources.get_distribution( + "google-cloud-videointelligence", + ).version, + ) +except pkg_resources.DistributionNotFound: + DEFAULT_CLIENT_INFO = gapic_v1.client_info.ClientInfo() + + +class VideoIntelligenceServiceTransport(abc.ABC): + """Abstract transport class for VideoIntelligenceService.""" + + AUTH_SCOPES = ("https://www.googleapis.com/auth/cloud-platform",) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: typing.Optional[str] = None, + scopes: typing.Optional[typing.Sequence[str]] = AUTH_SCOPES, + quota_project_id: typing.Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + **kwargs, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): A list of scopes.
+ quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + """ + # Save the hostname. Default to port 443 (HTTPS) if none is specified. + if ":" not in host: + host += ":443" + self._host = host + + # If no credentials are provided, then determine the appropriate + # defaults. + if credentials and credentials_file: + raise exceptions.DuplicateCredentialArgs( + "'credentials_file' and 'credentials' are mutually exclusive" + ) + + if credentials_file is not None: + credentials, _ = auth.load_credentials_from_file( + credentials_file, scopes=scopes, quota_project_id=quota_project_id + ) + + elif credentials is None: + credentials, _ = auth.default( + scopes=scopes, quota_project_id=quota_project_id + ) + + # Save the credentials. + self._credentials = credentials + + # Lifted into its own function so it can be stubbed out during tests. + self._prep_wrapped_messages(client_info) + + def _prep_wrapped_messages(self, client_info): + # Precompute the wrapped methods. 
+ self._wrapped_methods = { + self.annotate_video: gapic_v1.method.wrap_method( + self.annotate_video, + default_retry=retries.Retry( + initial=1.0, + maximum=120.0, + multiplier=2.5, + predicate=retries.if_exception_type( + exceptions.DeadlineExceeded, exceptions.ServiceUnavailable, + ), + ), + default_timeout=600.0, + client_info=client_info, + ), + } + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Return the client designed to process long-running operations.""" + raise NotImplementedError() + + @property + def annotate_video( + self, + ) -> typing.Callable[ + [video_intelligence.AnnotateVideoRequest], + typing.Union[operations.Operation, typing.Awaitable[operations.Operation]], + ]: + raise NotImplementedError() + + +__all__ = ("VideoIntelligenceServiceTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc.py new file mode 100644 index 000000000000..862d59dd7c0e --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc.py @@ -0,0 +1,281 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import grpc_helpers # type: ignore +from google.api_core import operations_v1 # type: ignore +from google.api_core import gapic_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO + + +class VideoIntelligenceServiceGrpcTransport(VideoIntelligenceServiceTransport): + """gRPC backend transport for VideoIntelligenceService. + + Service that implements the Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _stubs: Dict[str, Callable] + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Sequence[str] = None, + channel: grpc.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id: Optional[str] = None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. 
These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): A list of scopes. This argument is + ignored if ``channel`` is provided. + channel (Optional[grpc.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided.
+ credentials = False + + # If a channel was explicitly provided, set it. + self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + self._stubs = {} # type: Dict[str, Callable] + + # Run the base constructor. 
+ super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: str = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> grpc.Channel: + """Create and return a gRPC channel object. + Args: + address (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + grpc.Channel: A gRPC channel object. + + Raises: + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + @property + def grpc_channel(self) -> grpc.Channel: + """Return the channel designed to connect to this service.
+ """ + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[[video_intelligence.AnnotateVideoRequest], operations.Operation]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + ~.Operation]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc_asyncio.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc_asyncio.py new file mode 100644 index 000000000000..0654b228f914 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/services/video_intelligence_service/transports/grpc_asyncio.py @@ -0,0 +1,287 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import warnings +from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple + +from google.api_core import gapic_v1 # type: ignore +from google.api_core import grpc_helpers_async # type: ignore +from google.api_core import operations_v1 # type: ignore +from google import auth # type: ignore +from google.auth import credentials # type: ignore +from google.auth.transport.grpc import SslCredentials # type: ignore + +import grpc # type: ignore +from grpc.experimental import aio # type: ignore + +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.longrunning import operations_pb2 as operations # type: ignore + +from .base import VideoIntelligenceServiceTransport, DEFAULT_CLIENT_INFO +from .grpc import VideoIntelligenceServiceGrpcTransport + + +class VideoIntelligenceServiceGrpcAsyncIOTransport(VideoIntelligenceServiceTransport): + """gRPC AsyncIO backend transport for VideoIntelligenceService. + + Service that implements the Video Intelligence API. + + This class defines the same methods as the primary client, so the + primary client can load the underlying transport implementation + and call it. + + It sends protocol buffers over the wire using gRPC (which is built on + top of HTTP/2); the ``grpcio`` package must be installed. + """ + + _grpc_channel: aio.Channel + _stubs: Dict[str, Callable] = {} + + @classmethod + def create_channel( + cls, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + quota_project_id: Optional[str] = None, + **kwargs, + ) -> aio.Channel: + """Create and return a gRPC AsyncIO channel object. + Args: + address (Optional[str]): The host for the channel to use. + credentials (Optional[~.Credentials]): The + authorization credentials to attach to requests. These + credentials identify this application to the service. 
If + none are specified, the client will attempt to ascertain + the credentials from the environment. + credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is mutually exclusive with credentials. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + kwargs (Optional[dict]): Keyword arguments, which are passed to the + channel creation. + Returns: + aio.Channel: A gRPC AsyncIO channel object. + """ + scopes = scopes or cls.AUTH_SCOPES + return grpc_helpers_async.create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes, + quota_project_id=quota_project_id, + **kwargs, + ) + + def __init__( + self, + *, + host: str = "videointelligence.googleapis.com", + credentials: credentials.Credentials = None, + credentials_file: Optional[str] = None, + scopes: Optional[Sequence[str]] = None, + channel: aio.Channel = None, + api_mtls_endpoint: str = None, + client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, + ssl_channel_credentials: grpc.ChannelCredentials = None, + quota_project_id=None, + client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, + ) -> None: + """Instantiate the transport. + + Args: + host (Optional[str]): The hostname to connect to. + credentials (Optional[google.auth.credentials.Credentials]): The + authorization credentials to attach to requests. These + credentials identify the application to the service; if none + are specified, the client will attempt to ascertain the + credentials from the environment. + This argument is ignored if ``channel`` is provided.
+ credentials_file (Optional[str]): A file with credentials that can + be loaded with :func:`google.auth.load_credentials_from_file`. + This argument is ignored if ``channel`` is provided. + scopes (Optional[Sequence[str]]): An optional list of scopes needed for this + service. These are only used when credentials are not specified and + are passed to :func:`google.auth.default`. + channel (Optional[aio.Channel]): A ``Channel`` instance through + which to make calls. + api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. + If provided, it overrides the ``host`` argument and tries to create + a mutual TLS channel with client SSL credentials from + ``client_cert_source`` or application default SSL credentials. + client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): + Deprecated. A callback to provide client SSL certificate bytes and + private key bytes, both in PEM format. It is ignored if + ``api_mtls_endpoint`` is None. + ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials + for grpc channel. It is ignored if ``channel`` is provided. + quota_project_id (Optional[str]): An optional project to use for billing + and quota. + client_info (google.api_core.gapic_v1.client_info.ClientInfo): + The client info used to send a user-agent string along with + API requests. If ``None``, then default info will be used. + Generally, you only need to set this if you're developing + your own client library. + + Raises: + google.auth.exceptions.MutualTLSChannelError: If mutual TLS transport + creation failed for any reason. + google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` + and ``credentials_file`` are passed. + """ + self._ssl_channel_credentials = ssl_channel_credentials + + if channel: + # Sanity check: Ensure that channel and credentials are not both + # provided. + credentials = False + + # If a channel was explicitly provided, set it.
+ self._grpc_channel = channel + self._ssl_channel_credentials = None + elif api_mtls_endpoint: + warnings.warn( + "api_mtls_endpoint and client_cert_source are deprecated", + DeprecationWarning, + ) + + host = ( + api_mtls_endpoint + if ":" in api_mtls_endpoint + else api_mtls_endpoint + ":443" + ) + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # Create SSL credentials with client_cert_source or application + # default SSL credentials. + if client_cert_source: + cert, key = client_cert_source() + ssl_credentials = grpc.ssl_channel_credentials( + certificate_chain=cert, private_key=key + ) + else: + ssl_credentials = SslCredentials().ssl_credentials + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + self._ssl_channel_credentials = ssl_credentials + else: + host = host if ":" in host else host + ":443" + + if credentials is None: + credentials, _ = auth.default( + scopes=self.AUTH_SCOPES, quota_project_id=quota_project_id + ) + + # create a new channel. The provided one is ignored. + self._grpc_channel = type(self).create_channel( + host, + credentials=credentials, + credentials_file=credentials_file, + ssl_credentials=ssl_channel_credentials, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + ) + + # Run the base constructor. + super().__init__( + host=host, + credentials=credentials, + credentials_file=credentials_file, + scopes=scopes or self.AUTH_SCOPES, + quota_project_id=quota_project_id, + client_info=client_info, + ) + + self._stubs = {} + + @property + def grpc_channel(self) -> aio.Channel: + """Create the channel designed to connect to this service. 
+ + This property caches on the instance; repeated calls return + the same channel. + """ + # Return the channel from cache. + return self._grpc_channel + + @property + def operations_client(self) -> operations_v1.OperationsAsyncClient: + """Create the client designed to process long-running operations. + + This property caches on the instance; repeated calls return the same + client. + """ + # Sanity check: Only create a new client if we do not already have one. + if "operations_client" not in self.__dict__: + self.__dict__["operations_client"] = operations_v1.OperationsAsyncClient( + self.grpc_channel + ) + + # Return the client from cache. + return self.__dict__["operations_client"] + + @property + def annotate_video( + self, + ) -> Callable[ + [video_intelligence.AnnotateVideoRequest], Awaitable[operations.Operation] + ]: + r"""Return a callable for the annotate video method over gRPC. + + Performs asynchronous video annotation. Progress and results can + be retrieved through the ``google.longrunning.Operations`` + interface. ``Operation.metadata`` contains + ``AnnotateVideoProgress`` (progress). ``Operation.response`` + contains ``AnnotateVideoResponse`` (results). + + Returns: + Callable[[~.AnnotateVideoRequest], + Awaitable[~.Operation]]: + A function that, when called, will call the underlying RPC + on the server. + """ + # Generate a "stub function" on-the-fly which will actually make + # the request. + # gRPC handles serialization and deserialization, so we just need + # to pass in the functions for each. 
+ if "annotate_video" not in self._stubs: + self._stubs["annotate_video"] = self.grpc_channel.unary_unary( + "/google.cloud.videointelligence.v1p3beta1.VideoIntelligenceService/AnnotateVideo", + request_serializer=video_intelligence.AnnotateVideoRequest.serialize, + response_deserializer=operations.Operation.FromString, + ) + return self._stubs["annotate_video"] + + +__all__ = ("VideoIntelligenceServiceGrpcAsyncIOTransport",) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types.py deleted file mode 100644 index 021939b69217..000000000000 --- a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types.py +++ /dev/null @@ -1,56 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from __future__ import absolute_import -import sys - -from google.api_core.protobuf_helpers import get_messages - -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 -from google.protobuf import any_pb2 -from google.protobuf import duration_pb2 -from google.protobuf import timestamp_pb2 -from google.rpc import status_pb2 - - -_shared_modules = [ - operations_pb2, - any_pb2, - duration_pb2, - timestamp_pb2, - status_pb2, -] - -_local_modules = [ - video_intelligence_pb2, -] - -names = [] - -for module in _shared_modules: # pragma: NO COVER - for name, message in get_messages(module).items(): - setattr(sys.modules[__name__], name, message) - names.append(name) -for module in _local_modules: - for name, message in get_messages(module).items(): - message.__module__ = "google.cloud.videointelligence_v1p3beta1.types" - setattr(sys.modules[__name__], name, message) - names.append(name) - - -__all__ = tuple(sorted(names)) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/__init__.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/__init__.py new file mode 100644 index 000000000000..a325c3d98ddf --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/__init__.py @@ -0,0 +1,133 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +from .video_intelligence import ( + AnnotateVideoRequest, + VideoContext, + LabelDetectionConfig, + ShotChangeDetectionConfig, + ObjectTrackingConfig, + ExplicitContentDetectionConfig, + FaceDetectionConfig, + PersonDetectionConfig, + TextDetectionConfig, + VideoSegment, + LabelSegment, + LabelFrame, + Entity, + LabelAnnotation, + ExplicitContentFrame, + ExplicitContentAnnotation, + NormalizedBoundingBox, + TimestampedObject, + Track, + DetectedAttribute, + Celebrity, + CelebrityTrack, + CelebrityRecognitionAnnotation, + DetectedLandmark, + FaceDetectionAnnotation, + PersonDetectionAnnotation, + VideoAnnotationResults, + AnnotateVideoResponse, + VideoAnnotationProgress, + AnnotateVideoProgress, + SpeechTranscriptionConfig, + SpeechContext, + SpeechTranscription, + SpeechRecognitionAlternative, + WordInfo, + NormalizedVertex, + NormalizedBoundingPoly, + TextSegment, + TextFrame, + TextAnnotation, + ObjectTrackingFrame, + ObjectTrackingAnnotation, + LogoRecognitionAnnotation, + StreamingAnnotateVideoRequest, + StreamingVideoConfig, + StreamingAnnotateVideoResponse, + StreamingVideoAnnotationResults, + StreamingShotChangeDetectionConfig, + StreamingLabelDetectionConfig, + StreamingExplicitContentDetectionConfig, + StreamingObjectTrackingConfig, + StreamingAutomlActionRecognitionConfig, + StreamingAutomlClassificationConfig, + StreamingAutomlObjectTrackingConfig, + StreamingStorageConfig, +) + + +__all__ = ( + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ObjectTrackingConfig", + "ExplicitContentDetectionConfig", + "FaceDetectionConfig", + "PersonDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "TimestampedObject", + "Track", + 
"DetectedAttribute", + "Celebrity", + "CelebrityTrack", + "CelebrityRecognitionAnnotation", + "DetectedLandmark", + "FaceDetectionAnnotation", + "PersonDetectionAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", + "LogoRecognitionAnnotation", + "StreamingAnnotateVideoRequest", + "StreamingVideoConfig", + "StreamingAnnotateVideoResponse", + "StreamingVideoAnnotationResults", + "StreamingShotChangeDetectionConfig", + "StreamingLabelDetectionConfig", + "StreamingExplicitContentDetectionConfig", + "StreamingObjectTrackingConfig", + "StreamingAutomlActionRecognitionConfig", + "StreamingAutomlClassificationConfig", + "StreamingAutomlObjectTrackingConfig", + "StreamingStorageConfig", +) diff --git a/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/video_intelligence.py b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/video_intelligence.py new file mode 100644 index 000000000000..31bfa49d7c51 --- /dev/null +++ b/packages/google-cloud-videointelligence/google/cloud/videointelligence_v1p3beta1/types/video_intelligence.py @@ -0,0 +1,1638 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+# See the License for the specific language governing permissions and +# limitations under the License. +# + +import proto # type: ignore + + +from google.protobuf import duration_pb2 as duration # type: ignore +from google.protobuf import timestamp_pb2 as timestamp # type: ignore +from google.rpc import status_pb2 as status # type: ignore + + +__protobuf__ = proto.module( + package="google.cloud.videointelligence.v1p3beta1", + manifest={ + "LabelDetectionMode", + "Likelihood", + "StreamingFeature", + "Feature", + "AnnotateVideoRequest", + "VideoContext", + "LabelDetectionConfig", + "ShotChangeDetectionConfig", + "ObjectTrackingConfig", + "ExplicitContentDetectionConfig", + "FaceDetectionConfig", + "PersonDetectionConfig", + "TextDetectionConfig", + "VideoSegment", + "LabelSegment", + "LabelFrame", + "Entity", + "LabelAnnotation", + "ExplicitContentFrame", + "ExplicitContentAnnotation", + "NormalizedBoundingBox", + "TimestampedObject", + "Track", + "DetectedAttribute", + "Celebrity", + "CelebrityTrack", + "CelebrityRecognitionAnnotation", + "DetectedLandmark", + "FaceDetectionAnnotation", + "PersonDetectionAnnotation", + "VideoAnnotationResults", + "AnnotateVideoResponse", + "VideoAnnotationProgress", + "AnnotateVideoProgress", + "SpeechTranscriptionConfig", + "SpeechContext", + "SpeechTranscription", + "SpeechRecognitionAlternative", + "WordInfo", + "NormalizedVertex", + "NormalizedBoundingPoly", + "TextSegment", + "TextFrame", + "TextAnnotation", + "ObjectTrackingFrame", + "ObjectTrackingAnnotation", + "LogoRecognitionAnnotation", + "StreamingAnnotateVideoRequest", + "StreamingVideoConfig", + "StreamingAnnotateVideoResponse", + "StreamingVideoAnnotationResults", + "StreamingShotChangeDetectionConfig", + "StreamingLabelDetectionConfig", + "StreamingExplicitContentDetectionConfig", + "StreamingObjectTrackingConfig", + "StreamingAutomlActionRecognitionConfig", + "StreamingAutomlClassificationConfig", + "StreamingAutomlObjectTrackingConfig", + 
"StreamingStorageConfig", + }, +) + + +class LabelDetectionMode(proto.Enum): + r"""Label detection mode.""" + LABEL_DETECTION_MODE_UNSPECIFIED = 0 + SHOT_MODE = 1 + FRAME_MODE = 2 + SHOT_AND_FRAME_MODE = 3 + + +class Likelihood(proto.Enum): + r"""Bucketized representation of likelihood.""" + LIKELIHOOD_UNSPECIFIED = 0 + VERY_UNLIKELY = 1 + UNLIKELY = 2 + POSSIBLE = 3 + LIKELY = 4 + VERY_LIKELY = 5 + + +class StreamingFeature(proto.Enum): + r"""Streaming video annotation feature.""" + STREAMING_FEATURE_UNSPECIFIED = 0 + STREAMING_LABEL_DETECTION = 1 + STREAMING_SHOT_CHANGE_DETECTION = 2 + STREAMING_EXPLICIT_CONTENT_DETECTION = 3 + STREAMING_OBJECT_TRACKING = 4 + STREAMING_AUTOML_ACTION_RECOGNITION = 23 + STREAMING_AUTOML_CLASSIFICATION = 21 + STREAMING_AUTOML_OBJECT_TRACKING = 22 + + +class Feature(proto.Enum): + r"""Video annotation feature.""" + FEATURE_UNSPECIFIED = 0 + LABEL_DETECTION = 1 + SHOT_CHANGE_DETECTION = 2 + EXPLICIT_CONTENT_DETECTION = 3 + FACE_DETECTION = 4 + SPEECH_TRANSCRIPTION = 6 + TEXT_DETECTION = 7 + OBJECT_TRACKING = 9 + LOGO_RECOGNITION = 12 + CELEBRITY_RECOGNITION = 13 + PERSON_DETECTION = 14 + + +class AnnotateVideoRequest(proto.Message): + r"""Video annotation request. + + Attributes: + input_uri (str): + Input video location. Currently, only `Cloud + Storage `__ URIs are + supported. URIs must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + To identify multiple videos, a video URI may include + wildcards in the ``object-id``. Supported wildcards: '*' to + match 0 or more characters; '?' to match 1 character. If + unset, the input video should be embedded in the request as + ``input_content``. If set, ``input_content`` must be unset. + input_content (bytes): + The video data bytes. If unset, the input video(s) should be + specified via the ``input_uri``. 
If set, ``input_uri`` must + be unset. + features (Sequence[~.video_intelligence.Feature]): + Required. Requested video annotation + features. + video_context (~.video_intelligence.VideoContext): + Additional video context and/or feature- + specific parameters. + output_uri (str): + Optional. Location where the output (in JSON format) should + be stored. Currently, only `Cloud + Storage `__ URIs are + supported. These must be specified in the following format: + ``gs://bucket-id/object-id`` (other URI formats return + [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). + For more information, see `Request + URIs `__. + location_id (str): + Optional. Cloud region where annotation should take place. + Supported cloud regions are: ``us-east1``, ``us-west1``, + ``europe-west1``, ``asia-east1``. If no region is specified, + the region will be determined based on video file location. + """ + + input_uri = proto.Field(proto.STRING, number=1) + + input_content = proto.Field(proto.BYTES, number=6) + + features = proto.RepeatedField(proto.ENUM, number=2, enum="Feature",) + + video_context = proto.Field(proto.MESSAGE, number=3, message="VideoContext",) + + output_uri = proto.Field(proto.STRING, number=4) + + location_id = proto.Field(proto.STRING, number=5) + + +class VideoContext(proto.Message): + r"""Video context and/or feature-specific parameters. + + Attributes: + segments (Sequence[~.video_intelligence.VideoSegment]): + Video segments to annotate. The segments may + overlap and are not required to be contiguous or + span the whole video. If unspecified, each video + is treated as a single segment. + label_detection_config (~.video_intelligence.LabelDetectionConfig): + Config for LABEL_DETECTION. + shot_change_detection_config (~.video_intelligence.ShotChangeDetectionConfig): + Config for SHOT_CHANGE_DETECTION. + explicit_content_detection_config (~.video_intelligence.ExplicitContentDetectionConfig): + Config for EXPLICIT_CONTENT_DETECTION.
+ face_detection_config (~.video_intelligence.FaceDetectionConfig): + Config for FACE_DETECTION. + speech_transcription_config (~.video_intelligence.SpeechTranscriptionConfig): + Config for SPEECH_TRANSCRIPTION. + text_detection_config (~.video_intelligence.TextDetectionConfig): + Config for TEXT_DETECTION. + person_detection_config (~.video_intelligence.PersonDetectionConfig): + Config for PERSON_DETECTION. + object_tracking_config (~.video_intelligence.ObjectTrackingConfig): + Config for OBJECT_TRACKING. + """ + + segments = proto.RepeatedField(proto.MESSAGE, number=1, message="VideoSegment",) + + label_detection_config = proto.Field( + proto.MESSAGE, number=2, message="LabelDetectionConfig", + ) + + shot_change_detection_config = proto.Field( + proto.MESSAGE, number=3, message="ShotChangeDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, number=4, message="ExplicitContentDetectionConfig", + ) + + face_detection_config = proto.Field( + proto.MESSAGE, number=5, message="FaceDetectionConfig", + ) + + speech_transcription_config = proto.Field( + proto.MESSAGE, number=6, message="SpeechTranscriptionConfig", + ) + + text_detection_config = proto.Field( + proto.MESSAGE, number=8, message="TextDetectionConfig", + ) + + person_detection_config = proto.Field( + proto.MESSAGE, number=11, message="PersonDetectionConfig", + ) + + object_tracking_config = proto.Field( + proto.MESSAGE, number=13, message="ObjectTrackingConfig", + ) + + +class LabelDetectionConfig(proto.Message): + r"""Config for LABEL_DETECTION. + + Attributes: + label_detection_mode (~.video_intelligence.LabelDetectionMode): + What labels should be detected with LABEL_DETECTION, in + addition to video-level labels or segment-level labels. If + unspecified, defaults to ``SHOT_MODE``. + stationary_camera (bool): + Whether the video has been shot from a stationary (i.e., + non-moving) camera. When set to true, might improve + detection accuracy for moving objects. 
Should be used with + ``SHOT_AND_FRAME_MODE`` enabled. + model (str): + Model to use for label detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + frame_confidence_threshold (float): + The confidence threshold used to filter labels + from frame-level detection. If not set, it is set to 0.4 by + default. The valid range for this threshold is [0.1, 0.9]. + Any value set outside of this range will be clipped. Note: + For best results, follow the default threshold. We will + update the default threshold every time we release a new + model. + video_confidence_threshold (float): + The confidence threshold used to filter labels + from video-level and shot-level detections. If not set, it's + set to 0.3 by default. The valid range for this threshold is + [0.1, 0.9]. Any value set outside of this range will be + clipped. Note: For best results, follow the default + threshold. We will update the default threshold every time + we release a new model. + """ + + label_detection_mode = proto.Field(proto.ENUM, number=1, enum="LabelDetectionMode",) + + stationary_camera = proto.Field(proto.BOOL, number=2) + + model = proto.Field(proto.STRING, number=3) + + frame_confidence_threshold = proto.Field(proto.FLOAT, number=4) + + video_confidence_threshold = proto.Field(proto.FLOAT, number=5) + + +class ShotChangeDetectionConfig(proto.Message): + r"""Config for SHOT_CHANGE_DETECTION. + + Attributes: + model (str): + Model to use for shot change detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class ObjectTrackingConfig(proto.Message): + r"""Config for OBJECT_TRACKING. + + Attributes: + model (str): + Model to use for object tracking. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest".
+ """ + + model = proto.Field(proto.STRING, number=1) + + +class ExplicitContentDetectionConfig(proto.Message): + r"""Config for EXPLICIT_CONTENT_DETECTION. + + Attributes: + model (str): + Model to use for explicit content detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + model = proto.Field(proto.STRING, number=1) + + +class FaceDetectionConfig(proto.Message): + r"""Config for FACE_DETECTION. + + Attributes: + model (str): + Model to use for face detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + include_bounding_boxes (bool): + Whether bounding boxes are included in the + face annotation output. + include_attributes (bool): + Whether to enable face attributes detection, such as + glasses, dark_glasses, mouth_open etc. Ignored if + 'include_bounding_boxes' is set to false. + """ + + model = proto.Field(proto.STRING, number=1) + + include_bounding_boxes = proto.Field(proto.BOOL, number=2) + + include_attributes = proto.Field(proto.BOOL, number=5) + + +class PersonDetectionConfig(proto.Message): + r"""Config for PERSON_DETECTION. + + Attributes: + include_bounding_boxes (bool): + Whether bounding boxes are included in the + person detection annotation output. + include_pose_landmarks (bool): + Whether to enable pose landmarks detection. Ignored if + 'include_bounding_boxes' is set to false. + include_attributes (bool): + Whether to enable person attributes detection, such as cloth + color (black, blue, etc), type (coat, dress, etc), pattern + (plain, floral, etc), hair, etc. Ignored if + 'include_bounding_boxes' is set to false. + """ + + include_bounding_boxes = proto.Field(proto.BOOL, number=1) + + include_pose_landmarks = proto.Field(proto.BOOL, number=2) + + include_attributes = proto.Field(proto.BOOL, number=3) + + +class TextDetectionConfig(proto.Message): + r"""Config for TEXT_DETECTION. 
+ + Attributes: + language_hints (Sequence[str]): + Language hint can be specified if the + language to be detected is known a priori. It + can increase the accuracy of the detection. + Language hint must be language code in BCP-47 + format. + + Automatic language detection is performed if no + hint is provided. + model (str): + Model to use for text detection. + Supported values: "builtin/stable" (the default + if unset) and "builtin/latest". + """ + + language_hints = proto.RepeatedField(proto.STRING, number=1) + + model = proto.Field(proto.STRING, number=2) + + +class VideoSegment(proto.Message): + r"""Video segment. + + Attributes: + start_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the start of the segment + (inclusive). + end_time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the end of the segment + (inclusive). + """ + + start_time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class LabelSegment(proto.Message): + r"""Video segment level annotation results for label detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a label was detected. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class LabelFrame(proto.Message): + r"""Video frame level annotation results for label detection. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. + confidence (float): + Confidence that the label is accurate. Range: [0, 1]. 
+ """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + confidence = proto.Field(proto.FLOAT, number=2) + + +class Entity(proto.Message): + r"""Detected entity from video analysis. + + Attributes: + entity_id (str): + Opaque entity ID. Some IDs may be available in `Google + Knowledge Graph Search + API `__. + description (str): + Textual description, e.g., ``Fixed-gear bicycle``. + language_code (str): + Language code for ``description`` in BCP-47 format. + """ + + entity_id = proto.Field(proto.STRING, number=1) + + description = proto.Field(proto.STRING, number=2) + + language_code = proto.Field(proto.STRING, number=3) + + +class LabelAnnotation(proto.Message): + r"""Label annotation. + + Attributes: + entity (~.video_intelligence.Entity): + Detected entity. + category_entities (Sequence[~.video_intelligence.Entity]): + Common categories for the detected entity. For example, when + the label is ``Terrier``, the category is likely ``dog``. + And in some cases there might be more than one categories + e.g., ``Terrier`` could also be a ``pet``. + segments (Sequence[~.video_intelligence.LabelSegment]): + All video segments where a label was + detected. + frames (Sequence[~.video_intelligence.LabelFrame]): + All video frames where a label was detected. + """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + category_entities = proto.RepeatedField(proto.MESSAGE, number=2, message="Entity",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="LabelSegment",) + + frames = proto.RepeatedField(proto.MESSAGE, number=4, message="LabelFrame",) + + +class ExplicitContentFrame(proto.Message): + r"""Video frame level annotation results for explicit content. + + Attributes: + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + location. 
+ pornography_likelihood (~.video_intelligence.Likelihood): + Likelihood of the pornography content. + """ + + time_offset = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + pornography_likelihood = proto.Field(proto.ENUM, number=2, enum="Likelihood",) + + +class ExplicitContentAnnotation(proto.Message): + r"""Explicit content annotation (based on per-frame visual + signals only). If no explicit content has been detected in a + frame, no annotations are present for that frame. + + Attributes: + frames (Sequence[~.video_intelligence.ExplicitContentFrame]): + All video frames where explicit content was + detected. + """ + + frames = proto.RepeatedField( + proto.MESSAGE, number=1, message="ExplicitContentFrame", + ) + + +class NormalizedBoundingBox(proto.Message): + r"""Normalized bounding box. The normalized vertex coordinates are + relative to the original image. Range: [0, 1]. + + Attributes: + left (float): + Left X coordinate. + top (float): + Top Y coordinate. + right (float): + Right X coordinate. + bottom (float): + Bottom Y coordinate. + """ + + left = proto.Field(proto.FLOAT, number=1) + + top = proto.Field(proto.FLOAT, number=2) + + right = proto.Field(proto.FLOAT, number=3) + + bottom = proto.Field(proto.FLOAT, number=4) + + +class TimestampedObject(proto.Message): + r"""For tracking-related features. An object at time_offset with + attributes, and located with normalized_bounding_box. + + Attributes: + normalized_bounding_box (~.video_intelligence.NormalizedBoundingBox): + Normalized bounding box in a frame, where the + object is located. + time_offset (~.duration.Duration): + Time-offset, relative to the beginning of the + video, corresponding to the video frame for this + object. + attributes (Sequence[~.video_intelligence.DetectedAttribute]): + Optional. The attributes of the object in the + bounding box. + landmarks (Sequence[~.video_intelligence.DetectedLandmark]): + Optional. The detected landmarks.
+ """ + + normalized_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + attributes = proto.RepeatedField( + proto.MESSAGE, number=3, message="DetectedAttribute", + ) + + landmarks = proto.RepeatedField( + proto.MESSAGE, number=4, message="DetectedLandmark", + ) + + +class Track(proto.Message): + r"""A track of an object instance. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment of a track. + timestamped_objects (Sequence[~.video_intelligence.TimestampedObject]): + The object with timestamp and attributes per + frame in the track. + attributes (Sequence[~.video_intelligence.DetectedAttribute]): + Optional. Attributes in the track level. + confidence (float): + Optional. The confidence score of the tracked + object. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + timestamped_objects = proto.RepeatedField( + proto.MESSAGE, number=2, message="TimestampedObject", + ) + + attributes = proto.RepeatedField( + proto.MESSAGE, number=3, message="DetectedAttribute", + ) + + confidence = proto.Field(proto.FLOAT, number=4) + + +class DetectedAttribute(proto.Message): + r"""A generic detected attribute represented by name in string + format. + + Attributes: + name (str): + The name of the attribute, for example, glasses, + dark_glasses, mouth_open. A full list of supported type + names will be provided in the document. + confidence (float): + Detected attribute confidence. Range [0, 1]. + value (str): + Text value of the detection result. For + example, the value for "HairColor" can be + "black", "blonde", etc. + """ + + name = proto.Field(proto.STRING, number=1) + + confidence = proto.Field(proto.FLOAT, number=2) + + value = proto.Field(proto.STRING, number=3) + + +class Celebrity(proto.Message): + r"""Celebrity definition. 
+ + Attributes: + name (str): + The resource name of the celebrity. The format + ``video-intelligence/kg-mid`` indicates a celebrity from + the preloaded gallery, where kg-mid is the ID in the Google + Knowledge Graph that is unique for the celebrity. + display_name (str): + The celebrity name. + description (str): + Textual description of additional information + about the celebrity, if applicable. + """ + + name = proto.Field(proto.STRING, number=1) + + display_name = proto.Field(proto.STRING, number=2) + + description = proto.Field(proto.STRING, number=3) + + +class CelebrityTrack(proto.Message): + r"""The annotation result of a celebrity face track. + The RecognizedCelebrity field could be empty if the face track does + not have any matched celebrities. + + Attributes: + celebrities (Sequence[~.video_intelligence.CelebrityTrack.RecognizedCelebrity]): + Top N match of the celebrities for the face + in this track. + face_track (~.video_intelligence.Track): + A track of a person's face. + """ + + class RecognizedCelebrity(proto.Message): + r"""The recognized celebrity with confidence score. + + Attributes: + celebrity (~.video_intelligence.Celebrity): + The recognized celebrity. + confidence (float): + Recognition confidence. Range [0, 1]. + """ + + celebrity = proto.Field(proto.MESSAGE, number=1, message="Celebrity",) + + confidence = proto.Field(proto.FLOAT, number=2) + + celebrities = proto.RepeatedField( + proto.MESSAGE, number=1, message=RecognizedCelebrity, + ) + + face_track = proto.Field(proto.MESSAGE, number=3, message="Track",) + + +class CelebrityRecognitionAnnotation(proto.Message): + r"""Celebrity recognition annotation per video. + + Attributes: + celebrity_tracks (Sequence[~.video_intelligence.CelebrityTrack]): + The tracks detected from the input video, + including recognized celebrities and other + detected faces in the video.
+ """ + + celebrity_tracks = proto.RepeatedField( + proto.MESSAGE, number=1, message="CelebrityTrack", + ) + + +class DetectedLandmark(proto.Message): + r"""A generic detected landmark represented by name in string + format and a 2D location. + + Attributes: + name (str): + The name of this landmark, for example, left_hand, + right_shoulder. + point (~.video_intelligence.NormalizedVertex): + The 2D point of the detected landmark using + the normalized image coordindate system. The + normalized coordinates have the range from 0 to + 1. + confidence (float): + The confidence score of the detected landmark. Range [0, 1]. + """ + + name = proto.Field(proto.STRING, number=1) + + point = proto.Field(proto.MESSAGE, number=2, message="NormalizedVertex",) + + confidence = proto.Field(proto.FLOAT, number=3) + + +class FaceDetectionAnnotation(proto.Message): + r"""Face detection annotation. + + Attributes: + tracks (Sequence[~.video_intelligence.Track]): + The face tracks with attributes. + thumbnail (bytes): + The thumbnail of a person's face. + """ + + tracks = proto.RepeatedField(proto.MESSAGE, number=3, message="Track",) + + thumbnail = proto.Field(proto.BYTES, number=4) + + +class PersonDetectionAnnotation(proto.Message): + r"""Person detection annotation per video. + + Attributes: + tracks (Sequence[~.video_intelligence.Track]): + The detected tracks of a person. + """ + + tracks = proto.RepeatedField(proto.MESSAGE, number=1, message="Track",) + + +class VideoAnnotationResults(proto.Message): + r"""Annotation results for a single video. + + Attributes: + input_uri (str): + Video file location in `Cloud + Storage `__. + segment (~.video_intelligence.VideoSegment): + Video segment on which the annotation is run. + segment_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Topical label annotations on video level or + user-specified segment level. There is exactly + one element for each unique label. 
+ segment_presence_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Presence label annotations on video level or user-specified + segment level. There is exactly one element for each unique + label. Compared to the existing topical + ``segment_label_annotations``, this field presents more + fine-grained, segment-level labels detected in video content + and is made available only when the client sets + ``LabelDetectionConfig.model`` to "builtin/latest" in the + request. + shot_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Topical label annotations on shot level. + There is exactly one element for each unique + label. + shot_presence_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Presence label annotations on shot level. There is exactly + one element for each unique label. Compared to the existing + topical ``shot_label_annotations``, this field presents more + fine-grained, shot-level labels detected in video content + and is made available only when the client sets + ``LabelDetectionConfig.model`` to "builtin/latest" in the + request. + frame_label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotations on frame level. + There is exactly one element for each unique + label. + face_detection_annotations (Sequence[~.video_intelligence.FaceDetectionAnnotation]): + Face detection annotations. + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotations. Each shot is represented as + a video segment. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation. + speech_transcriptions (Sequence[~.video_intelligence.SpeechTranscription]): + Speech transcription. + text_annotations (Sequence[~.video_intelligence.TextAnnotation]): + OCR text detection and tracking. + Annotations for list of detected text snippets. + Each will have list of frame information + associated with it. 
+ object_annotations (Sequence[~.video_intelligence.ObjectTrackingAnnotation]): + Annotations for list of objects detected and + tracked in video. + logo_recognition_annotations (Sequence[~.video_intelligence.LogoRecognitionAnnotation]): + Annotations for list of logos detected, + tracked and recognized in video. + person_detection_annotations (Sequence[~.video_intelligence.PersonDetectionAnnotation]): + Person detection annotations. + celebrity_recognition_annotations (~.video_intelligence.CelebrityRecognitionAnnotation): + Celebrity recognition annotations. + error (~.status.Status): + If set, indicates an error. Note that for a single + ``AnnotateVideoRequest`` some videos may succeed and some + may fail. + """ + + input_uri = proto.Field(proto.STRING, number=1) + + segment = proto.Field(proto.MESSAGE, number=10, message="VideoSegment",) + + segment_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + segment_presence_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=23, message="LabelAnnotation", + ) + + shot_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=3, message="LabelAnnotation", + ) + + shot_presence_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=24, message="LabelAnnotation", + ) + + frame_label_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="LabelAnnotation", + ) + + face_detection_annotations = proto.RepeatedField( + proto.MESSAGE, number=13, message="FaceDetectionAnnotation", + ) + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=6, message="VideoSegment", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=7, message="ExplicitContentAnnotation", + ) + + speech_transcriptions = proto.RepeatedField( + proto.MESSAGE, number=11, message="SpeechTranscription", + ) + + text_annotations = proto.RepeatedField( + proto.MESSAGE, number=12, message="TextAnnotation", + ) + + object_annotations = 
proto.RepeatedField( + proto.MESSAGE, number=14, message="ObjectTrackingAnnotation", + ) + + logo_recognition_annotations = proto.RepeatedField( + proto.MESSAGE, number=19, message="LogoRecognitionAnnotation", + ) + + person_detection_annotations = proto.RepeatedField( + proto.MESSAGE, number=20, message="PersonDetectionAnnotation", + ) + + celebrity_recognition_annotations = proto.Field( + proto.MESSAGE, number=21, message="CelebrityRecognitionAnnotation", + ) + + error = proto.Field(proto.MESSAGE, number=9, message=status.Status,) + + +class AnnotateVideoResponse(proto.Message): + r"""Video annotation response. Included in the ``response`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_results (Sequence[~.video_intelligence.VideoAnnotationResults]): + Annotation results for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_results = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationResults", + ) + + +class VideoAnnotationProgress(proto.Message): + r"""Annotation progress for a single video. + + Attributes: + input_uri (str): + Video file location in `Cloud + Storage `__. + progress_percent (int): + Approximate percentage processed thus far. + Guaranteed to be 100 when fully processed. + start_time (~.timestamp.Timestamp): + Time when the request was received. + update_time (~.timestamp.Timestamp): + Time of the most recent update. + feature (~.video_intelligence.Feature): + Specifies which feature is being tracked if + the request contains more than one feature. + segment (~.video_intelligence.VideoSegment): + Specifies which segment is being tracked if + the request contains more than one segment. 
+ """ + + input_uri = proto.Field(proto.STRING, number=1) + + progress_percent = proto.Field(proto.INT32, number=2) + + start_time = proto.Field(proto.MESSAGE, number=3, message=timestamp.Timestamp,) + + update_time = proto.Field(proto.MESSAGE, number=4, message=timestamp.Timestamp,) + + feature = proto.Field(proto.ENUM, number=5, enum="Feature",) + + segment = proto.Field(proto.MESSAGE, number=6, message="VideoSegment",) + + +class AnnotateVideoProgress(proto.Message): + r"""Video annotation progress. Included in the ``metadata`` field of the + ``Operation`` returned by the ``GetOperation`` call of the + ``google::longrunning::Operations`` service. + + Attributes: + annotation_progress (Sequence[~.video_intelligence.VideoAnnotationProgress]): + Progress metadata for all videos specified in + ``AnnotateVideoRequest``. + """ + + annotation_progress = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoAnnotationProgress", + ) + + +class SpeechTranscriptionConfig(proto.Message): + r"""Config for SPEECH_TRANSCRIPTION. + + Attributes: + language_code (str): + Required. *Required* The language of the supplied audio as a + `BCP-47 `__ + language tag. Example: "en-US". See `Language + Support `__ + for a list of the currently supported language codes. + max_alternatives (int): + Optional. Maximum number of recognition hypotheses to be + returned. Specifically, the maximum number of + ``SpeechRecognitionAlternative`` messages within each + ``SpeechTranscription``. The server may return fewer than + ``max_alternatives``. Valid values are ``0``-``30``. A value + of ``0`` or ``1`` will return a maximum of one. If omitted, + will return a maximum of one. + filter_profanity (bool): + Optional. If set to ``true``, the server will attempt to + filter out profanities, replacing all but the initial + character in each filtered word with asterisks, e.g. "f***". + If set to ``false`` or omitted, profanities won't be + filtered out. 
+ speech_contexts (Sequence[~.video_intelligence.SpeechContext]): + Optional. A means to provide context to + assist the speech recognition. + enable_automatic_punctuation (bool): + Optional. If 'true', adds punctuation to + recognition result hypotheses. This feature is + only available in select languages. Setting this + for requests in other languages has no effect at + all. The default 'false' value does not add + punctuation to result hypotheses. NOTE: "This is + currently offered as an experimental service, + complimentary to all users. In the future this + may be exclusively available as a premium + feature.". + audio_tracks (Sequence[int]): + Optional. For file formats, such as MXF or + MKV, supporting multiple audio tracks, specify + up to two tracks. Default: track 0. + enable_speaker_diarization (bool): + Optional. If 'true', enables speaker detection for each + recognized word in the top alternative of the recognition + result using a speaker_tag provided in the WordInfo. Note: + When this is true, we send all the words from the beginning + of the audio for the top alternative in every consecutive + response. This is done in order to improve our speaker tags + as our models learn to identify the speakers in the + conversation over time. + diarization_speaker_count (int): + Optional. If set, specifies the estimated number of speakers + in the conversation. If not set, defaults to '2'. Ignored + unless enable_speaker_diarization is set to true. + enable_word_confidence (bool): + Optional. If ``true``, the top result includes a list of + words and the confidence for those words. If ``false``, no + word-level confidence information is returned. The default + is ``false``. 
+ """ + + language_code = proto.Field(proto.STRING, number=1) + + max_alternatives = proto.Field(proto.INT32, number=2) + + filter_profanity = proto.Field(proto.BOOL, number=3) + + speech_contexts = proto.RepeatedField( + proto.MESSAGE, number=4, message="SpeechContext", + ) + + enable_automatic_punctuation = proto.Field(proto.BOOL, number=5) + + audio_tracks = proto.RepeatedField(proto.INT32, number=6) + + enable_speaker_diarization = proto.Field(proto.BOOL, number=7) + + diarization_speaker_count = proto.Field(proto.INT32, number=8) + + enable_word_confidence = proto.Field(proto.BOOL, number=9) + + +class SpeechContext(proto.Message): + r"""Provides "hints" to the speech recognizer to favor specific + words and phrases in the results. + + Attributes: + phrases (Sequence[str]): + Optional. A list of strings containing words and phrases + "hints" so that the speech recognition is more likely to + recognize them. This can be used to improve the accuracy for + specific words and phrases, for example, if specific + commands are typically spoken by the user. This can also be + used to add additional words to the vocabulary of the + recognizer. See `usage + limits `__. + """ + + phrases = proto.RepeatedField(proto.STRING, number=1) + + +class SpeechTranscription(proto.Message): + r"""A speech recognition result corresponding to a portion of the + audio. + + Attributes: + alternatives (Sequence[~.video_intelligence.SpeechRecognitionAlternative]): + May contain one or more recognition hypotheses (up to the + maximum specified in ``max_alternatives``). These + alternatives are ordered in terms of accuracy, with the top + (first) alternative being the most probable, as ranked by + the recognizer. + language_code (str): + Output only. The + `BCP-47 `__ + language tag of the language in this result. This language + code was detected to have the most likelihood of being + spoken in the audio. 
+ """ + + alternatives = proto.RepeatedField( + proto.MESSAGE, number=1, message="SpeechRecognitionAlternative", + ) + + language_code = proto.Field(proto.STRING, number=2) + + +class SpeechRecognitionAlternative(proto.Message): + r"""Alternative hypotheses (a.k.a. n-best list). + + Attributes: + transcript (str): + Transcript text representing the words that + the user spoke. + confidence (float): + Output only. The confidence estimate between 0.0 and 1.0. A + higher number indicates an estimated greater likelihood that + the recognized words are correct. This field is set only for + the top alternative. This field is not guaranteed to be + accurate and users should not rely on it to be always + provided. The default of 0.0 is a sentinel value indicating + ``confidence`` was not set. + words (Sequence[~.video_intelligence.WordInfo]): + Output only. A list of word-specific information for each + recognized word. Note: When ``enable_speaker_diarization`` + is set to true, you will see all the words from the + beginning of the audio. + """ + + transcript = proto.Field(proto.STRING, number=1) + + confidence = proto.Field(proto.FLOAT, number=2) + + words = proto.RepeatedField(proto.MESSAGE, number=3, message="WordInfo",) + + +class WordInfo(proto.Message): + r"""Word-specific information for recognized words. Word information is + only included in the response when certain request parameters are + set, such as ``enable_word_time_offsets``. + + Attributes: + start_time (~.duration.Duration): + Time offset relative to the beginning of the audio, and + corresponding to the start of the spoken word. This field is + only set if ``enable_word_time_offsets=true`` and only in + the top hypothesis. This is an experimental feature and the + accuracy of the time offset can vary. + end_time (~.duration.Duration): + Time offset relative to the beginning of the audio, and + corresponding to the end of the spoken word. 
This field is + only set if ``enable_word_time_offsets=true`` and only in + the top hypothesis. This is an experimental feature and the + accuracy of the time offset can vary. + word (str): + The word corresponding to this set of + information. + confidence (float): + Output only. The confidence estimate between 0.0 and 1.0. A + higher number indicates an estimated greater likelihood that + the recognized words are correct. This field is set only for + the top alternative. This field is not guaranteed to be + accurate and users should not rely on it to be always + provided. The default of 0.0 is a sentinel value indicating + ``confidence`` was not set. + speaker_tag (int): + Output only. A distinct integer value is assigned for every + speaker within the audio. This field specifies which one of + those speakers was detected to have spoken this word. Value + ranges from 1 up to diarization_speaker_count, and is only + set if speaker diarization is enabled. + """ + + start_time = proto.Field(proto.MESSAGE, number=1, message=duration.Duration,) + + end_time = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + word = proto.Field(proto.STRING, number=3) + + confidence = proto.Field(proto.FLOAT, number=4) + + speaker_tag = proto.Field(proto.INT32, number=5) + + +class NormalizedVertex(proto.Message): + r"""A vertex represents a 2D point in the image. + NOTE: the normalized vertex coordinates are relative to the + original image and range from 0 to 1. + + Attributes: + x (float): + X coordinate. + y (float): + Y coordinate. + """ + + x = proto.Field(proto.FLOAT, number=1) + + y = proto.Field(proto.FLOAT, number=2) + + +class NormalizedBoundingPoly(proto.Message): + r"""Normalized bounding polygon for text (that might not be aligned with + axis). Contains list of the corner points in clockwise order + starting from top-left corner. 
For example, for a rectangular + bounding box: When the text is horizontal it might look like: 0----1 + \| \| 3----2 + + When it's clockwise rotated 180 degrees around the top-left corner + it becomes: 2----3 \| \| 1----0 + + and the vertex order will still be (0, 1, 2, 3). Note that values + can be less than 0, or greater than 1 due to trigonometric + calculations for the location of the box. + + Attributes: + vertices (Sequence[~.video_intelligence.NormalizedVertex]): + Normalized vertices of the bounding polygon. + """ + + vertices = proto.RepeatedField(proto.MESSAGE, number=1, message="NormalizedVertex",) + + +class TextSegment(proto.Message): + r"""Video segment level annotation results for text detection. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Video segment where a text snippet was + detected. + confidence (float): + Confidence for the track of detected text. It + is calculated as the highest over all frames + where OCR detected text appears. + frames (Sequence[~.video_intelligence.TextFrame]): + Information related to the frames where OCR + detected text appears. + """ + + segment = proto.Field(proto.MESSAGE, number=1, message="VideoSegment",) + + confidence = proto.Field(proto.FLOAT, number=2) + + frames = proto.RepeatedField(proto.MESSAGE, number=3, message="TextFrame",) + + +class TextFrame(proto.Message): + r"""Video frame level annotation results for text annotation + (OCR). Contains information regarding timestamp and bounding box + locations for the frames containing detected OCR text snippets. + + Attributes: + rotated_bounding_box (~.video_intelligence.NormalizedBoundingPoly): + Bounding polygon of the detected text for + this frame. + time_offset (~.duration.Duration): + Timestamp of this frame. 
+ """ + + rotated_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingPoly", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class TextAnnotation(proto.Message): + r"""Annotations related to one detected OCR text snippet. This + will contain the corresponding text, confidence value, and frame + level information for each detection. + + Attributes: + text (str): + The detected text. + segments (Sequence[~.video_intelligence.TextSegment]): + All video segments where OCR detected text + appears. + """ + + text = proto.Field(proto.STRING, number=1) + + segments = proto.RepeatedField(proto.MESSAGE, number=2, message="TextSegment",) + + +class ObjectTrackingFrame(proto.Message): + r"""Video frame level annotations for object detection and + tracking. This field stores per frame location, time offset, and + confidence. + + Attributes: + normalized_bounding_box (~.video_intelligence.NormalizedBoundingBox): + The normalized bounding box location of this + object track for the frame. + time_offset (~.duration.Duration): + The timestamp of the frame in microseconds. + """ + + normalized_bounding_box = proto.Field( + proto.MESSAGE, number=1, message="NormalizedBoundingBox", + ) + + time_offset = proto.Field(proto.MESSAGE, number=2, message=duration.Duration,) + + +class ObjectTrackingAnnotation(proto.Message): + r"""Annotations corresponding to one tracked object. + + Attributes: + segment (~.video_intelligence.VideoSegment): + Non-streaming batch mode ONLY. + Each object track corresponds to one video + segment where it appears. + track_id (int): + Streaming mode ONLY. In streaming mode, we do not know the + end time of a tracked object before it is completed. Hence, + there is no VideoSegment info returned. Instead, we provide + a unique identifiable integer track_id so that the customers + can correlate the results of the ongoing + ObjectTrackAnnotation of the same track_id over time. 
+ entity (~.video_intelligence.Entity): + Entity to specify the object category that + this track is labeled as. + confidence (float): + Object category's labeling confidence of this + track. + frames (Sequence[~.video_intelligence.ObjectTrackingFrame]): + Information corresponding to all frames where + this object track appears. Non-streaming batch + mode: it may be one or multiple + ObjectTrackingFrame messages in frames. + Streaming mode: it can only be one + ObjectTrackingFrame message in frames. + """ + + segment = proto.Field( + proto.MESSAGE, number=3, oneof="track_info", message="VideoSegment", + ) + + track_id = proto.Field(proto.INT64, number=5, oneof="track_info") + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + confidence = proto.Field(proto.FLOAT, number=4) + + frames = proto.RepeatedField( + proto.MESSAGE, number=2, message="ObjectTrackingFrame", + ) + + +class LogoRecognitionAnnotation(proto.Message): + r"""Annotation corresponding to one detected, tracked and + recognized logo class. + + Attributes: + entity (~.video_intelligence.Entity): + Entity category information to specify the + logo class that all the logo tracks within this + LogoRecognitionAnnotation are recognized as. + tracks (Sequence[~.video_intelligence.Track]): + All logo tracks where the recognized logo + appears. Each track corresponds to one logo + instance appearing in consecutive frames. + segments (Sequence[~.video_intelligence.VideoSegment]): + All video segments where the recognized logo + appears. There might be multiple instances of + the same logo class appearing in one + VideoSegment. 
+ """ + + entity = proto.Field(proto.MESSAGE, number=1, message="Entity",) + + tracks = proto.RepeatedField(proto.MESSAGE, number=2, message="Track",) + + segments = proto.RepeatedField(proto.MESSAGE, number=3, message="VideoSegment",) + + +class StreamingAnnotateVideoRequest(proto.Message): + r"""The top-level message sent by the client for the + ``StreamingAnnotateVideo`` method. Multiple + ``StreamingAnnotateVideoRequest`` messages are sent. The first + message must only contain a ``StreamingVideoConfig`` message. All + subsequent messages must only contain ``input_content`` data. + + Attributes: + video_config (~.video_intelligence.StreamingVideoConfig): + Provides information to the annotator, specifying how to + process the request. The first + ``AnnotateStreamingVideoRequest`` message must only contain + a ``video_config`` message. + input_content (bytes): + The video data to be annotated. Chunks of video data are + sequentially sent in ``StreamingAnnotateVideoRequest`` + messages. Except the initial + ``StreamingAnnotateVideoRequest`` message containing only + ``video_config``, all subsequent + ``AnnotateStreamingVideoRequest`` messages must only contain + the ``input_content`` field. Note: as with all bytes fields, + protocol buffers use a pure binary representation (not base64). + """ + + video_config = proto.Field( + proto.MESSAGE, + number=1, + oneof="streaming_request", + message="StreamingVideoConfig", + ) + + input_content = proto.Field(proto.BYTES, number=2, oneof="streaming_request") + + +class StreamingVideoConfig(proto.Message): + r"""Provides information to the annotator that specifies how to + process the request. + + Attributes: + shot_change_detection_config (~.video_intelligence.StreamingShotChangeDetectionConfig): + Config for STREAMING_SHOT_CHANGE_DETECTION. + label_detection_config (~.video_intelligence.StreamingLabelDetectionConfig): + Config for STREAMING_LABEL_DETECTION. 
+ explicit_content_detection_config (~.video_intelligence.StreamingExplicitContentDetectionConfig): + Config for STREAMING_EXPLICIT_CONTENT_DETECTION. + object_tracking_config (~.video_intelligence.StreamingObjectTrackingConfig): + Config for STREAMING_OBJECT_TRACKING. + automl_action_recognition_config (~.video_intelligence.StreamingAutomlActionRecognitionConfig): + Config for STREAMING_AUTOML_ACTION_RECOGNITION. + automl_classification_config (~.video_intelligence.StreamingAutomlClassificationConfig): + Config for STREAMING_AUTOML_CLASSIFICATION. + automl_object_tracking_config (~.video_intelligence.StreamingAutomlObjectTrackingConfig): + Config for STREAMING_AUTOML_OBJECT_TRACKING. + feature (~.video_intelligence.StreamingFeature): + Requested annotation feature. + storage_config (~.video_intelligence.StreamingStorageConfig): + Streaming storage option. By default: storage + is disabled. + """ + + shot_change_detection_config = proto.Field( + proto.MESSAGE, + number=2, + oneof="streaming_config", + message="StreamingShotChangeDetectionConfig", + ) + + label_detection_config = proto.Field( + proto.MESSAGE, + number=3, + oneof="streaming_config", + message="StreamingLabelDetectionConfig", + ) + + explicit_content_detection_config = proto.Field( + proto.MESSAGE, + number=4, + oneof="streaming_config", + message="StreamingExplicitContentDetectionConfig", + ) + + object_tracking_config = proto.Field( + proto.MESSAGE, + number=5, + oneof="streaming_config", + message="StreamingObjectTrackingConfig", + ) + + automl_action_recognition_config = proto.Field( + proto.MESSAGE, + number=23, + oneof="streaming_config", + message="StreamingAutomlActionRecognitionConfig", + ) + + automl_classification_config = proto.Field( + proto.MESSAGE, + number=21, + oneof="streaming_config", + message="StreamingAutomlClassificationConfig", + ) + + automl_object_tracking_config = proto.Field( + proto.MESSAGE, + number=22, + oneof="streaming_config", + 
message="StreamingAutomlObjectTrackingConfig", + ) + + feature = proto.Field(proto.ENUM, number=1, enum="StreamingFeature",) + + storage_config = proto.Field( + proto.MESSAGE, number=30, message="StreamingStorageConfig", + ) + + +class StreamingAnnotateVideoResponse(proto.Message): + r"""``StreamingAnnotateVideoResponse`` is the only message returned to + the client by ``StreamingAnnotateVideo``. A series of zero or more + ``StreamingAnnotateVideoResponse`` messages are streamed back to the + client. + + Attributes: + error (~.status.Status): + If set, returns a [google.rpc.Status][google.rpc.Status] + message that specifies the error for the operation. + annotation_results (~.video_intelligence.StreamingVideoAnnotationResults): + Streaming annotation results. + annotation_results_uri (str): + Google Cloud Storage (GCS) URI that stores annotation results + of one streaming session in JSON format. It is the + annotation_result_storage_directory from the request + followed by '/cloud_project_number-session_id'. + """ + + error = proto.Field(proto.MESSAGE, number=1, message=status.Status,) + + annotation_results = proto.Field( + proto.MESSAGE, number=2, message="StreamingVideoAnnotationResults", + ) + + annotation_results_uri = proto.Field(proto.STRING, number=3) + + +class StreamingVideoAnnotationResults(proto.Message): + r"""Streaming annotation results corresponding to a portion of + the video that is currently being processed. + + Attributes: + shot_annotations (Sequence[~.video_intelligence.VideoSegment]): + Shot annotation results. Each shot is + represented as a video segment. + label_annotations (Sequence[~.video_intelligence.LabelAnnotation]): + Label annotation results. + explicit_annotation (~.video_intelligence.ExplicitContentAnnotation): + Explicit content annotation results. + object_annotations (Sequence[~.video_intelligence.ObjectTrackingAnnotation]): + Object tracking results. 
+ """ + + shot_annotations = proto.RepeatedField( + proto.MESSAGE, number=1, message="VideoSegment", + ) + + label_annotations = proto.RepeatedField( + proto.MESSAGE, number=2, message="LabelAnnotation", + ) + + explicit_annotation = proto.Field( + proto.MESSAGE, number=3, message="ExplicitContentAnnotation", + ) + + object_annotations = proto.RepeatedField( + proto.MESSAGE, number=4, message="ObjectTrackingAnnotation", + ) + + +class StreamingShotChangeDetectionConfig(proto.Message): + r"""Config for STREAMING_SHOT_CHANGE_DETECTION.""" + + +class StreamingLabelDetectionConfig(proto.Message): + r"""Config for STREAMING_LABEL_DETECTION. + + Attributes: + stationary_camera (bool): + Whether the video has been captured from a + stationary (i.e. non-moving) camera. When set to + true, might improve detection accuracy for + moving objects. Default: false. + """ + + stationary_camera = proto.Field(proto.BOOL, number=1) + + +class StreamingExplicitContentDetectionConfig(proto.Message): + r"""Config for STREAMING_EXPLICIT_CONTENT_DETECTION.""" + + +class StreamingObjectTrackingConfig(proto.Message): + r"""Config for STREAMING_OBJECT_TRACKING.""" + + +class StreamingAutomlActionRecognitionConfig(proto.Message): + r"""Config for STREAMING_AUTOML_ACTION_RECOGNITION. + + Attributes: + model_name (str): + Resource name of AutoML model. Format: + ``projects/{project_id}/locations/{location_id}/models/{model_id}`` + """ + + model_name = proto.Field(proto.STRING, number=1) + + +class StreamingAutomlClassificationConfig(proto.Message): + r"""Config for STREAMING_AUTOML_CLASSIFICATION. + + Attributes: + model_name (str): + Resource name of AutoML model. Format: + ``projects/{project_number}/locations/{location_id}/models/{model_id}`` + """ + + model_name = proto.Field(proto.STRING, number=1) + + +class StreamingAutomlObjectTrackingConfig(proto.Message): + r"""Config for STREAMING_AUTOML_OBJECT_TRACKING. + + Attributes: + model_name (str): + Resource name of AutoML model. 
Format: + ``projects/{project_id}/locations/{location_id}/models/{model_id}`` + """ + + model_name = proto.Field(proto.STRING, number=1) + + +class StreamingStorageConfig(proto.Message): + r"""Config for streaming storage option. + + Attributes: + enable_storage_annotation_result (bool): + Enable streaming storage. Default: false. + annotation_result_storage_directory (str): + Cloud Storage URI to store all annotation results for one + client. The client should specify this field as the top-level + storage directory. Annotation results of different sessions + will be put into different sub-directories denoted by + project_name and session_id. All sub-directories will be + auto-generated by the program and made accessible to the + client in the response proto. URIs must be specified in the + following format: ``gs://bucket-id/object-id``. ``bucket-id`` + should be a valid Cloud Storage bucket created by the client, and + bucket permissions must also be configured properly. + ``object-id`` can be an arbitrary string that makes sense to the + client. Other URI formats will return an error and cause a Cloud + Storage write failure. 
+ """ + + enable_storage_annotation_result = proto.Field(proto.BOOL, number=1) + + annotation_result_storage_directory = proto.Field(proto.STRING, number=3) + + +__all__ = tuple(sorted(__protobuf__.manifest)) diff --git a/packages/google-cloud-videointelligence/mypy.ini b/packages/google-cloud-videointelligence/mypy.ini new file mode 100644 index 000000000000..4505b485436b --- /dev/null +++ b/packages/google-cloud-videointelligence/mypy.ini @@ -0,0 +1,3 @@ +[mypy] +python_version = 3.6 +namespace_packages = True diff --git a/packages/google-cloud-videointelligence/noxfile.py b/packages/google-cloud-videointelligence/noxfile.py index 475828814231..78d077a56bd1 100644 --- a/packages/google-cloud-videointelligence/noxfile.py +++ b/packages/google-cloud-videointelligence/noxfile.py @@ -27,8 +27,8 @@ BLACK_PATHS = ["docs", "google", "tests", "noxfile.py", "setup.py"] DEFAULT_PYTHON_VERSION = "3.8" -SYSTEM_TEST_PYTHON_VERSIONS = ["2.7", "3.8"] -UNIT_TEST_PYTHON_VERSIONS = ["2.7", "3.5", "3.6", "3.7", "3.8"] +SYSTEM_TEST_PYTHON_VERSIONS = ["3.8"] +UNIT_TEST_PYTHON_VERSIONS = ["3.6", "3.7", "3.8"] @nox.session(python=DEFAULT_PYTHON_VERSION) @@ -70,6 +70,8 @@ def lint_setup_py(session): def default(session): # Install all test dependencies, then install this package in-place. 
+ session.install("asyncmock", "pytest-asyncio") + session.install( "mock", "pytest", "pytest-cov", ) @@ -151,7 +153,7 @@ def docs(session): """Build the docs for this library.""" session.install("-e", ".") - session.install("sphinx<=3.0.0", "alabaster", "recommonmark") + session.install("sphinx", "alabaster", "recommonmark") shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True) session.run( diff --git a/packages/google-cloud-videointelligence/samples/analyze/analyze.py b/packages/google-cloud-videointelligence/samples/analyze/analyze.py index fcfa535c893f..e5e650a8daa9 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/analyze.py +++ b/packages/google-cloud-videointelligence/samples/analyze/analyze.py @@ -35,16 +35,17 @@ import io from google.cloud import videointelligence -from google.cloud.videointelligence import enums def analyze_explicit_content(path): # [START video_analyze_explicit_content] """ Detects explicit content from the GCS path to a video. """ video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.EXPLICIT_CONTENT_DETECTION] + features = [videointelligence.Feature.EXPLICIT_CONTENT_DETECTION] - operation = video_client.annotate_video(input_uri=path, features=features) + operation = video_client.annotate_video( + request={"features": features, "input_uri": path} + ) print("\nProcessing video for explicit content annotations:") result = operation.result(timeout=90) @@ -52,8 +53,8 @@ def analyze_explicit_content(path): # Retrieve first result because a single video was processed for frame in result.annotation_results[0].explicit_annotation.frames: - likelihood = enums.Likelihood(frame.pornography_likelihood) - frame_time = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + likelihood = videointelligence.Likelihood(frame.pornography_likelihood) + frame_time = frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 print("Time: 
{}s".format(frame_time)) print("\tpornography: {}".format(likelihood.name)) # [END video_analyze_explicit_content] @@ -63,14 +64,14 @@ def analyze_labels(path): # [START video_analyze_labels_gcs] """ Detects labels given a GCS path. """ video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.LABEL_DETECTION] + features = [videointelligence.Feature.LABEL_DETECTION] - mode = videointelligence.enums.LabelDetectionMode.SHOT_AND_FRAME_MODE - config = videointelligence.types.LabelDetectionConfig(label_detection_mode=mode) - context = videointelligence.types.VideoContext(label_detection_config=config) + mode = videointelligence.LabelDetectionMode.SHOT_AND_FRAME_MODE + config = videointelligence.LabelDetectionConfig(label_detection_mode=mode) + context = videointelligence.VideoContext(label_detection_config=config) operation = video_client.annotate_video( - input_uri=path, features=features, video_context=context + request={"features": features, "input_uri": path, "video_context": context} ) print("\nProcessing video for label annotations:") @@ -89,11 +90,11 @@ def analyze_labels(path): for i, segment in enumerate(segment_label.segments): start_time = ( segment.segment.start_time_offset.seconds - + segment.segment.start_time_offset.nanos / 1e9 + + segment.segment.start_time_offset.microseconds / 1e6 ) end_time = ( segment.segment.end_time_offset.seconds - + segment.segment.end_time_offset.nanos / 1e9 + + segment.segment.end_time_offset.microseconds / 1e6 ) positions = "{}s to {}s".format(start_time, end_time) confidence = segment.confidence @@ -113,11 +114,11 @@ def analyze_labels(path): for i, shot in enumerate(shot_label.segments): start_time = ( shot.segment.start_time_offset.seconds - + shot.segment.start_time_offset.nanos / 1e9 + + shot.segment.start_time_offset.microseconds / 1e6 ) end_time = ( shot.segment.end_time_offset.seconds - + shot.segment.end_time_offset.nanos / 1e9 + + 
shot.segment.end_time_offset.microseconds / 1e6 ) positions = "{}s to {}s".format(start_time, end_time) confidence = shot.confidence @@ -137,7 +138,7 @@ def analyze_labels(path): # Each frame_label_annotation has many frames, # here we print information only about the first frame. frame = frame_label.frames[0] - time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + time_offset = frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 print("\tFirst frame time offset: {}s".format(time_offset)) print("\tFirst frame confidence: {}".format(frame.confidence)) print("\n") @@ -148,13 +149,13 @@ def analyze_labels_file(path): # [START video_analyze_labels] """Detect labels given a file path.""" video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.LABEL_DETECTION] + features = [videointelligence.Feature.LABEL_DETECTION] with io.open(path, "rb") as movie: input_content = movie.read() operation = video_client.annotate_video( - features=features, input_content=input_content + request={"features": features, "input_content": input_content} ) print("\nProcessing video for label annotations:") @@ -173,11 +174,11 @@ def analyze_labels_file(path): for i, segment in enumerate(segment_label.segments): start_time = ( segment.segment.start_time_offset.seconds - + segment.segment.start_time_offset.nanos / 1e9 + + segment.segment.start_time_offset.microseconds / 1e6 ) end_time = ( segment.segment.end_time_offset.seconds - + segment.segment.end_time_offset.nanos / 1e9 + + segment.segment.end_time_offset.microseconds / 1e6 ) positions = "{}s to {}s".format(start_time, end_time) confidence = segment.confidence @@ -197,11 +198,11 @@ def analyze_labels_file(path): for i, shot in enumerate(shot_label.segments): start_time = ( shot.segment.start_time_offset.seconds - + shot.segment.start_time_offset.nanos / 1e9 + + shot.segment.start_time_offset.microseconds / 1e6 ) end_time = ( 
shot.segment.end_time_offset.seconds - + shot.segment.end_time_offset.nanos / 1e9 + + shot.segment.end_time_offset.microseconds / 1e6 ) positions = "{}s to {}s".format(start_time, end_time) confidence = shot.confidence @@ -221,7 +222,7 @@ def analyze_labels_file(path): # Each frame_label_annotation has many frames, # here we print information only about the first frame. frame = frame_label.frames[0] - time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + time_offset = frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 print("\tFirst frame time offset: {}s".format(time_offset)) print("\tFirst frame confidence: {}".format(frame.confidence)) print("\n") @@ -232,8 +233,10 @@ def analyze_shots(path): # [START video_analyze_shots] """ Detects camera shot changes. """ video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.SHOT_CHANGE_DETECTION] - operation = video_client.annotate_video(input_uri=path, features=features) + features = [videointelligence.Feature.SHOT_CHANGE_DETECTION] + operation = video_client.annotate_video( + request={"features": features, "input_uri": path} + ) print("\nProcessing video for shot change annotations:") result = operation.result(timeout=90) @@ -241,8 +244,12 @@ def analyze_shots(path): # first result is retrieved because a single video was processed for i, shot in enumerate(result.annotation_results[0].shot_annotations): - start_time = shot.start_time_offset.seconds + shot.start_time_offset.nanos / 1e9 - end_time = shot.end_time_offset.seconds + shot.end_time_offset.nanos / 1e9 + start_time = ( + shot.start_time_offset.seconds + shot.start_time_offset.microseconds / 1e6 + ) + end_time = ( + shot.end_time_offset.seconds + shot.end_time_offset.microseconds / 1e6 + ) print("\tShot {}: {} to {}".format(i, start_time, end_time)) # [END video_analyze_shots] @@ -253,17 +260,19 @@ def speech_transcription(path): from google.cloud import videointelligence 
video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.SPEECH_TRANSCRIPTION] + features = [videointelligence.Feature.SPEECH_TRANSCRIPTION] - config = videointelligence.types.SpeechTranscriptionConfig( + config = videointelligence.SpeechTranscriptionConfig( language_code="en-US", enable_automatic_punctuation=True ) - video_context = videointelligence.types.VideoContext( - speech_transcription_config=config - ) + video_context = videointelligence.VideoContext(speech_transcription_config=config) operation = video_client.annotate_video( - input_uri=path, features=features, video_context=video_context + request={ + "features": features, + "input_uri": path, + "video_context": video_context, + } ) print("\nProcessing video for speech transcription.") @@ -292,8 +301,8 @@ def speech_transcription(path): end_time = word_info.end_time print( "\t{}s - {}s: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + end_time.seconds + end_time.microseconds * 1e-6, word, ) ) @@ -306,9 +315,11 @@ def video_detect_text_gcs(input_uri): from google.cloud import videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.TEXT_DETECTION] + features = [videointelligence.Feature.TEXT_DETECTION] - operation = video_client.annotate_video(input_uri=input_uri, features=features) + operation = video_client.annotate_video( + request={"features": features, "input_uri": input_uri} + ) print("\nProcessing video for text detection.") result = operation.result(timeout=600) @@ -325,8 +336,8 @@ def video_detect_text_gcs(input_uri): end_time = text_segment.segment.end_time_offset print( "start_time: {}, end_time: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + 
end_time.seconds + end_time.microseconds * 1e-6, ) ) @@ -337,7 +348,7 @@ def video_detect_text_gcs(input_uri): time_offset = frame.time_offset print( "Time offset for the first frame: {}".format( - time_offset.seconds + time_offset.nanos * 1e-9 + time_offset.seconds + time_offset.microseconds * 1e-6 ) ) print("Rotated Bounding Box Vertices:") @@ -352,16 +363,18 @@ def video_detect_text(path): from google.cloud import videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.TEXT_DETECTION] - video_context = videointelligence.types.VideoContext() + features = [videointelligence.Feature.TEXT_DETECTION] + video_context = videointelligence.VideoContext() with io.open(path, "rb") as file: input_content = file.read() operation = video_client.annotate_video( - input_content=input_content, # the bytes of the video file - features=features, - video_context=video_context, + request={ + "features": features, + "input_content": input_content, + "video_context": video_context, + } ) print("\nProcessing video for text detection.") @@ -379,8 +392,8 @@ def video_detect_text(path): end_time = text_segment.segment.end_time_offset print( "start_time: {}, end_time: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + end_time.seconds + end_time.microseconds * 1e-6, ) ) @@ -391,7 +404,7 @@ def video_detect_text(path): time_offset = frame.time_offset print( "Time offset for the first frame: {}".format( - time_offset.seconds + time_offset.nanos * 1e-9 + time_offset.seconds + time_offset.microseconds * 1e-6 ) ) print("Rotated Bounding Box Vertices:") @@ -406,8 +419,10 @@ def track_objects_gcs(gcs_uri): from google.cloud import videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.OBJECT_TRACKING] - operation = 
video_client.annotate_video(input_uri=gcs_uri, features=features) + features = [videointelligence.Feature.OBJECT_TRACKING] + operation = video_client.annotate_video( + request={"features": features, "input_uri": gcs_uri} + ) print("\nProcessing video for object annotations.") result = operation.result(timeout=300) @@ -424,9 +439,9 @@ def track_objects_gcs(gcs_uri): print( "Segment: {}s to {}s".format( object_annotation.segment.start_time_offset.seconds - + object_annotation.segment.start_time_offset.nanos / 1e9, + + object_annotation.segment.start_time_offset.microseconds / 1e6, object_annotation.segment.end_time_offset.seconds - + object_annotation.segment.end_time_offset.nanos / 1e9, + + object_annotation.segment.end_time_offset.microseconds / 1e6, ) ) @@ -437,7 +452,7 @@ def track_objects_gcs(gcs_uri): box = frame.normalized_bounding_box print( "Time offset of the first frame: {}s".format( - frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 ) ) print("Bounding box position:") @@ -455,13 +470,13 @@ def track_objects(path): from google.cloud import videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.OBJECT_TRACKING] + features = [videointelligence.Feature.OBJECT_TRACKING] with io.open(path, "rb") as file: input_content = file.read() operation = video_client.annotate_video( - input_content=input_content, features=features + request={"features": features, "input_content": input_content} ) print("\nProcessing video for object annotations.") @@ -480,9 +495,9 @@ def track_objects(path): print( "Segment: {}s to {}s".format( object_annotation.segment.start_time_offset.seconds - + object_annotation.segment.start_time_offset.nanos / 1e9, + + object_annotation.segment.start_time_offset.microseconds / 1e6, object_annotation.segment.end_time_offset.seconds - + object_annotation.segment.end_time_offset.nanos / 1e9, + + 
object_annotation.segment.end_time_offset.microseconds / 1e6, ) ) @@ -493,7 +508,7 @@ def track_objects(path): box = frame.normalized_bounding_box print( "Time offset of the first frame: {}s".format( - frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 ) ) print("Bounding box position:") diff --git a/packages/google-cloud-videointelligence/samples/analyze/analyze_test.py b/packages/google-cloud-videointelligence/samples/analyze/analyze_test.py index 6623fd326983..2da6cb760f1b 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/analyze_test.py +++ b/packages/google-cloud-videointelligence/samples/analyze/analyze_test.py @@ -74,7 +74,7 @@ def test_speech_transcription(capsys): def test_detect_text_gcs(capsys): analyze.video_detect_text_gcs("gs://cloud-samples-data/video/googlework_tiny.mp4") out, _ = capsys.readouterr() - assert 'Text' in out + assert "Text" in out # Flaky timeout @@ -82,7 +82,7 @@ def test_detect_text_gcs(capsys): def test_detect_text(capsys): analyze.video_detect_text("resources/googlework_tiny.mp4") out, _ = capsys.readouterr() - assert 'Text' in out + assert "Text" in out # Flaky timeout diff --git a/packages/google-cloud-videointelligence/samples/analyze/beta_snippets.py b/packages/google-cloud-videointelligence/samples/analyze/beta_snippets.py index 33d221ce4b90..0e484f8a6293 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/beta_snippets.py +++ b/packages/google-cloud-videointelligence/samples/analyze/beta_snippets.py @@ -52,17 +52,19 @@ def speech_transcription(input_uri): video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.SPEECH_TRANSCRIPTION] + features = [videointelligence.Feature.SPEECH_TRANSCRIPTION] - config = videointelligence.types.SpeechTranscriptionConfig( + config = videointelligence.SpeechTranscriptionConfig( language_code="en-US", 
enable_automatic_punctuation=True ) - video_context = videointelligence.types.VideoContext( - speech_transcription_config=config - ) + video_context = videointelligence.VideoContext(speech_transcription_config=config) operation = video_client.annotate_video( - input_uri=input_uri, features=features, video_context=video_context + request={ + "features": features, + "input_uri": input_uri, + "video_context": video_context, + } ) print("\nProcessing video for speech transcription.") @@ -91,8 +93,8 @@ def speech_transcription(input_uri): end_time = word_info.end_time print( "\t{}s - {}s: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + end_time.seconds + end_time.microseconds * 1e-6, word, ) ) @@ -105,9 +107,11 @@ def video_detect_text_gcs(input_uri): from google.cloud import videointelligence_v1p2beta1 as videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.TEXT_DETECTION] + features = [videointelligence.Feature.TEXT_DETECTION] - operation = video_client.annotate_video(input_uri=input_uri, features=features) + operation = video_client.annotate_video( + request={"features": features, "input_uri": input_uri} + ) print("\nProcessing video for text detection.") result = operation.result(timeout=300) @@ -125,8 +129,8 @@ def video_detect_text_gcs(input_uri): end_time = text_segment.segment.end_time_offset print( "start_time: {}, end_time: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + end_time.seconds + end_time.microseconds * 1e-6, ) ) @@ -137,7 +141,7 @@ def video_detect_text_gcs(input_uri): time_offset = frame.time_offset print( "Time offset for the first frame: {}".format( - time_offset.seconds + time_offset.nanos * 1e-9 + time_offset.seconds + time_offset.microseconds * 
1e-6 ) ) print("Rotated Bounding Box Vertices:") @@ -153,16 +157,18 @@ def video_detect_text(path): from google.cloud import videointelligence_v1p2beta1 as videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.TEXT_DETECTION] - video_context = videointelligence.types.VideoContext() + features = [videointelligence.Feature.TEXT_DETECTION] + video_context = videointelligence.VideoContext() with io.open(path, "rb") as file: input_content = file.read() operation = video_client.annotate_video( - input_content=input_content, # the bytes of the video file - features=features, - video_context=video_context, + request={ + "features": features, + "input_content": input_content, + "video_context": video_context, + } ) print("\nProcessing video for text detection.") @@ -181,8 +187,8 @@ def video_detect_text(path): end_time = text_segment.segment.end_time_offset print( "start_time: {}, end_time: {}".format( - start_time.seconds + start_time.nanos * 1e-9, - end_time.seconds + end_time.nanos * 1e-9, + start_time.seconds + start_time.microseconds * 1e-6, + end_time.seconds + end_time.microseconds * 1e-6, ) ) @@ -193,7 +199,7 @@ def video_detect_text(path): time_offset = frame.time_offset print( "Time offset for the first frame: {}".format( - time_offset.seconds + time_offset.nanos * 1e-9 + time_offset.seconds + time_offset.microseconds * 1e-6 ) ) print("Rotated Bounding Box Vertices:") @@ -211,9 +217,13 @@ def track_objects_gcs(gcs_uri): # It is recommended to use location_id as 'us-east1' for the best latency # due to different types of processors used in this region and others. 
video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.OBJECT_TRACKING] + features = [videointelligence.Feature.OBJECT_TRACKING] operation = video_client.annotate_video( - input_uri=gcs_uri, features=features, location_id="us-east1" + request={ + "features": features, + "input_uri": gcs_uri, + "location_id": "us-east1", + } ) print("\nProcessing video for object annotations.") @@ -233,9 +243,9 @@ def track_objects_gcs(gcs_uri): print( "Segment: {}s to {}s".format( object_annotation.segment.start_time_offset.seconds - + object_annotation.segment.start_time_offset.nanos / 1e9, + + object_annotation.segment.start_time_offset.microseconds / 1e6, object_annotation.segment.end_time_offset.seconds - + object_annotation.segment.end_time_offset.nanos / 1e9, + + object_annotation.segment.end_time_offset.microseconds / 1e6, ) ) @@ -246,7 +256,7 @@ def track_objects_gcs(gcs_uri): box = frame.normalized_bounding_box print( "Time offset of the first frame: {}s".format( - frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 ) ) print("Bounding box position:") @@ -265,7 +275,7 @@ def track_objects(path): from google.cloud import videointelligence_v1p2beta1 as videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.OBJECT_TRACKING] + features = [videointelligence.Feature.OBJECT_TRACKING] with io.open(path, "rb") as file: input_content = file.read() @@ -273,7 +283,11 @@ def track_objects(path): # It is recommended to use location_id as 'us-east1' for the best latency # due to different types of processors used in this region and others. 
operation = video_client.annotate_video( - input_content=input_content, features=features, location_id="us-east1" + request={ + "features": features, + "input_content": input_content, + "location_id": "us-east1", + } ) print("\nProcessing video for object annotations.") @@ -293,9 +307,9 @@ def track_objects(path): print( "Segment: {}s to {}s".format( object_annotation.segment.start_time_offset.seconds - + object_annotation.segment.start_time_offset.nanos / 1e9, + + object_annotation.segment.start_time_offset.microseconds / 1e6, object_annotation.segment.end_time_offset.seconds - + object_annotation.segment.end_time_offset.nanos / 1e9, + + object_annotation.segment.end_time_offset.microseconds / 1e6, ) ) @@ -306,7 +320,7 @@ def track_objects(path): box = frame.normalized_bounding_box print( "Time offset of the first frame: {}s".format( - frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 ) ) print("Bounding box position:") @@ -328,12 +342,12 @@ def detect_labels_streaming(path): client = videointelligence.StreamingVideoIntelligenceServiceClient() # Set streaming config. - config = videointelligence.types.StreamingVideoConfig( - feature=(videointelligence.enums.StreamingFeature.STREAMING_LABEL_DETECTION) + config = videointelligence.StreamingVideoConfig( + feature=(videointelligence.StreamingFeature.STREAMING_LABEL_DETECTION) ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=config ) @@ -352,9 +366,7 @@ def detect_labels_streaming(path): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() @@ -380,7 +392,9 @@ def stream_generator(): for annotation in label_annotations: # Each annotation has one frame, which has a timeoffset. frame = annotation.frames[0] - time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + time_offset = ( + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 + ) description = annotation.entity.description confidence = annotation.frames[0].confidence @@ -400,14 +414,12 @@ def detect_shot_change_streaming(path): client = videointelligence.StreamingVideoIntelligenceServiceClient() # Set streaming config. - config = videointelligence.types.StreamingVideoConfig( - feature=( - videointelligence.enums.StreamingFeature.STREAMING_SHOT_CHANGE_DETECTION - ) + config = videointelligence.StreamingVideoConfig( + feature=(videointelligence.StreamingFeature.STREAMING_SHOT_CHANGE_DETECTION) ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=config ) @@ -426,9 +438,7 @@ def detect_shot_change_streaming(path): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() @@ -448,11 +458,11 @@ def stream_generator(): for annotation in response.annotation_results.shot_annotations: start = ( annotation.start_time_offset.seconds - + annotation.start_time_offset.nanos / 1e9 + + annotation.start_time_offset.microseconds / 1e6 ) end = ( annotation.end_time_offset.seconds - + annotation.end_time_offset.nanos / 1e9 + + annotation.end_time_offset.microseconds / 1e6 ) print("Shot: {}s to {}s".format(start, end)) @@ -468,12 +478,12 @@ def track_objects_streaming(path): client = videointelligence.StreamingVideoIntelligenceServiceClient() # Set streaming config. - config = videointelligence.types.StreamingVideoConfig( - feature=(videointelligence.enums.StreamingFeature.STREAMING_OBJECT_TRACKING) + config = videointelligence.StreamingVideoConfig( + feature=(videointelligence.StreamingFeature.STREAMING_OBJECT_TRACKING) ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=config ) @@ -492,9 +502,7 @@ def track_objects_streaming(path): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() @@ -520,7 +528,9 @@ def stream_generator(): for annotation in object_annotations: # Each annotation has one frame, which has a timeoffset. frame = annotation.frames[0] - time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 + time_offset = ( + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 + ) description = annotation.entity.description confidence = annotation.confidence @@ -557,14 +567,14 @@ def detect_explicit_content_streaming(path): client = videointelligence.StreamingVideoIntelligenceServiceClient() # Set streaming config. - config = videointelligence.types.StreamingVideoConfig( + config = videointelligence.StreamingVideoConfig( feature=( - videointelligence.enums.StreamingFeature.STREAMING_EXPLICIT_CONTENT_DETECTION + videointelligence.StreamingFeature.STREAMING_EXPLICIT_CONTENT_DETECTION ) ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=config ) @@ -583,9 +593,7 @@ def detect_explicit_content_streaming(path): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() @@ -603,8 +611,10 @@ def stream_generator(): break for frame in response.annotation_results.explicit_annotation.frames: - time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9 - pornography_likelihood = videointelligence.enums.Likelihood( + time_offset = ( + frame.time_offset.seconds + frame.time_offset.microseconds / 1e6 + ) + pornography_likelihood = videointelligence.Likelihood( frame.pornography_likelihood ) @@ -624,19 +634,19 @@ def annotation_to_storage_streaming(path, output_uri): # Set streaming config specifying the output_uri. # The output_uri is the prefix of the actual output files. - storage_config = videointelligence.types.StreamingStorageConfig( + storage_config = videointelligence.StreamingStorageConfig( enable_storage_annotation_result=True, annotation_result_storage_directory=output_uri, ) # Here we use label detection as an example. # All features support output to GCS. - config = videointelligence.types.StreamingVideoConfig( - feature=(videointelligence.enums.StreamingFeature.STREAMING_LABEL_DETECTION), + config = videointelligence.StreamingVideoConfig( + feature=(videointelligence.StreamingFeature.STREAMING_LABEL_DETECTION), storage_config=storage_config, ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=config ) @@ -655,9 +665,7 @@ def annotation_to_storage_streaming(path, output_uri): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() @@ -682,7 +690,6 @@ def streaming_automl_classification(path, project_id, model_id): import io from google.cloud import videointelligence_v1p3beta1 as videointelligence - from google.cloud.videointelligence_v1p3beta1 import enums # path = 'path_to_file' # project_id = 'gcp_project_id' @@ -695,17 +702,17 @@ def streaming_automl_classification(path, project_id, model_id): ) # Here we use classification as an example. - automl_config = videointelligence.types.StreamingAutomlClassificationConfig( + automl_config = videointelligence.StreamingAutomlClassificationConfig( model_name=model_path ) - video_config = videointelligence.types.StreamingVideoConfig( - feature=enums.StreamingFeature.STREAMING_AUTOML_CLASSIFICATION, + video_config = videointelligence.StreamingVideoConfig( + feature=videointelligence.StreamingFeature.STREAMING_AUTOML_CLASSIFICATION, automl_classification_config=automl_config, ) # config_request should be the first in the stream of requests. 
- config_request = videointelligence.types.StreamingAnnotateVideoRequest( + config_request = videointelligence.StreamingAnnotateVideoRequest( video_config=video_config ) @@ -727,9 +734,7 @@ def streaming_automl_classification(path, project_id, model_id): def stream_generator(): yield config_request for chunk in stream: - yield videointelligence.types.StreamingAnnotateVideoRequest( - input_content=chunk - ) + yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk) requests = stream_generator() diff --git a/packages/google-cloud-videointelligence/samples/analyze/beta_snippets_test.py b/packages/google-cloud-videointelligence/samples/analyze/beta_snippets_test.py index bcb6c7bc489e..1c9da43acf12 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/beta_snippets_test.py +++ b/packages/google-cloud-videointelligence/samples/analyze/beta_snippets_test.py @@ -15,13 +15,13 @@ # limitations under the License. import os +from urllib.request import urlopen import uuid import backoff from google.api_core.exceptions import Conflict from google.cloud import storage import pytest -from six.moves.urllib.request import urlopen import beta_snippets @@ -55,7 +55,7 @@ def video_path(tmpdir_factory): @pytest.fixture(scope="function") def bucket(): # Create a temporaty bucket to store annotation output. 
- bucket_name = f'tmp-{uuid.uuid4().hex}' + bucket_name = f"tmp-{uuid.uuid4().hex}" storage_client = storage.Client() bucket = storage_client.create_bucket(bucket_name) @@ -128,7 +128,7 @@ def test_detect_text(capsys): in_file = "./resources/googlework_tiny.mp4" beta_snippets.video_detect_text(in_file) out, _ = capsys.readouterr() - assert 'Text' in out + assert "Text" in out # Flaky timeout @@ -137,7 +137,7 @@ def test_detect_text_gcs(capsys): in_file = "gs://python-docs-samples-tests/video/googlework_tiny.mp4" beta_snippets.video_detect_text_gcs(in_file) out, _ = capsys.readouterr() - assert 'Text' in out + assert "Text" in out # Flaky InvalidArgument diff --git a/packages/google-cloud-videointelligence/samples/analyze/noxfile.py b/packages/google-cloud-videointelligence/samples/analyze/noxfile.py index b90eef00f2d9..ab2c49227c3b 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/noxfile.py +++ b/packages/google-cloud-videointelligence/samples/analyze/noxfile.py @@ -37,28 +37,25 @@ TEST_CONFIG = { # You can opt out from the test for specific Python versions. - 'ignored_versions': ["2.7"], - + "ignored_versions": ["2.7"], # Old samples are opted out of enforcing Python type hints # All new samples should feature them - 'enforce_type_hints': False, - + "enforce_type_hints": False, # An envvar key for determining the project id to use. Change it # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a # build specific Cloud project. You can also use your own string # to use your own Cloud project. - 'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT', + "gcloud_project_env": "GOOGLE_CLOUD_PROJECT", # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT', - # A dictionary you want to inject into your test. Don't put any # secrets here. These values will override predefined values. - 'envs': {}, + "envs": {}, } try: # Ensure we can import noxfile_config in the project's directory. 
- sys.path.append('.') + sys.path.append(".") from noxfile_config import TEST_CONFIG_OVERRIDE except ImportError as e: print("No user noxfile_config found: detail: {}".format(e)) @@ -73,12 +70,12 @@ def get_pytest_env_vars(): ret = {} # Override the GCLOUD_PROJECT and the alias. - env_key = TEST_CONFIG['gcloud_project_env'] + env_key = TEST_CONFIG["gcloud_project_env"] # This should error out if not set. - ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key] + ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key] # Apply user supplied envs. - ret.update(TEST_CONFIG['envs']) + ret.update(TEST_CONFIG["envs"]) return ret @@ -87,7 +84,7 @@ def get_pytest_env_vars(): ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"] # Any default versions that should be ignored. -IGNORED_VERSIONS = TEST_CONFIG['ignored_versions'] +IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"] TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS]) @@ -136,7 +133,7 @@ def _determine_local_import_names(start_dir): @nox.session def lint(session): - if not TEST_CONFIG['enforce_type_hints']: + if not TEST_CONFIG["enforce_type_hints"]: session.install("flake8", "flake8-import-order") else: session.install("flake8", "flake8-import-order", "flake8-annotations") @@ -145,9 +142,11 @@ def lint(session): args = FLAKE8_COMMON_ARGS + [ "--application-import-names", ",".join(local_names), - "." 
+ ".", ] session.run("flake8", *args) + + # # Black # @@ -160,6 +159,7 @@ def blacken(session): session.run("black", *python_files) + # # Sample Tests # @@ -199,9 +199,9 @@ def py(session): if session.python in TESTED_VERSIONS: _session_tests(session) else: - session.skip("SKIPPED: {} tests are disabled for this sample.".format( - session.python - )) + session.skip( + "SKIPPED: {} tests are disabled for this sample.".format(session.python) + ) # diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_beta.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_beta.py index 9d882d488e94..6e6a11f4d428 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_beta.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_beta.py @@ -27,16 +27,18 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"): input_content = f.read() # Configure the request - config = videointelligence.types.FaceDetectionConfig( + config = videointelligence.FaceDetectionConfig( include_bounding_boxes=True, include_attributes=True ) - context = videointelligence.types.VideoContext(face_detection_config=config) + context = videointelligence.VideoContext(face_detection_config=config) # Start the asynchronous request operation = client.annotate_video( - input_content=input_content, - features=[videointelligence.enums.Feature.FACE_DETECTION], - video_context=context, + request={ + "features": [videointelligence.Feature.FACE_DETECTION], + "input_content": input_content, + "video_context": context, + } ) print("\nProcessing video for face detection annotations.") @@ -53,9 +55,9 @@ def detect_faces(local_file_path="path/to/your/video-file.mp4"): print( "Segment: {}s to {}s".format( track.segment.start_time_offset.seconds - + track.segment.start_time_offset.nanos / 1e9, + + track.segment.start_time_offset.microseconds / 1e6, track.segment.end_time_offset.seconds - + 
track.segment.end_time_offset.nanos / 1e9, + + track.segment.end_time_offset.microseconds / 1e6, ) ) diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_gcs_beta.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_gcs_beta.py index 262184b673ab..6646d6a54e33 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_gcs_beta.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_faces_gcs_beta.py @@ -22,16 +22,18 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"): client = videointelligence.VideoIntelligenceServiceClient() # Configure the request - config = videointelligence.types.FaceDetectionConfig( + config = videointelligence.FaceDetectionConfig( include_bounding_boxes=True, include_attributes=True ) - context = videointelligence.types.VideoContext(face_detection_config=config) + context = videointelligence.VideoContext(face_detection_config=config) # Start the asynchronous request operation = client.annotate_video( - input_uri=gcs_uri, - features=[videointelligence.enums.Feature.FACE_DETECTION], - video_context=context, + request={ + "features": [videointelligence.Feature.FACE_DETECTION], + "input_uri": gcs_uri, + "video_context": context, + } ) print("\nProcessing video for face detection annotations.") @@ -48,9 +50,9 @@ def detect_faces(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"): print( "Segment: {}s to {}s".format( track.segment.start_time_offset.seconds - + track.segment.start_time_offset.nanos / 1e9, + + track.segment.start_time_offset.microseconds / 1e6, track.segment.end_time_offset.seconds - + track.segment.end_time_offset.nanos / 1e9, + + track.segment.end_time_offset.microseconds / 1e6, ) ) diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo.py index 319e1c8745a8..fbd0018f0c0b 100644 --- 
a/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo.py @@ -26,9 +26,11 @@ def detect_logo(local_file_path="path/to/your/video.mp4"): with io.open(local_file_path, "rb") as f: input_content = f.read() - features = [videointelligence.enums.Feature.LOGO_RECOGNITION] + features = [videointelligence.Feature.LOGO_RECOGNITION] - operation = client.annotate_video(input_content=input_content, features=features) + operation = client.annotate_video( + request={"features": features, "input_content": input_content} + ) print(u"Waiting for operation to complete...") response = operation.result() @@ -53,13 +55,13 @@ def detect_logo(local_file_path="path/to/your/video.mp4"): print( u"\n\tStart Time Offset : {}.{}".format( track.segment.start_time_offset.seconds, - track.segment.start_time_offset.nanos, + track.segment.start_time_offset.microseconds * 1000, ) ) print( u"\tEnd Time Offset : {}.{}".format( track.segment.end_time_offset.seconds, - track.segment.end_time_offset.nanos, + track.segment.end_time_offset.microseconds * 1000, ) ) print(u"\tConfidence : {}".format(track.confidence)) @@ -91,12 +93,14 @@ def detect_logo(local_file_path="path/to/your/video.mp4"): for segment in logo_recognition_annotation.segments: print( u"\n\tStart Time Offset : {}.{}".format( - segment.start_time_offset.seconds, segment.start_time_offset.nanos, + segment.start_time_offset.seconds, + segment.start_time_offset.microseconds * 1000, ) ) print( u"\tEnd Time Offset : {}.{}".format( - segment.end_time_offset.seconds, segment.end_time_offset.nanos, + segment.end_time_offset.seconds, + segment.end_time_offset.microseconds * 1000, ) ) diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo_gcs.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo_gcs.py index a999888555f3..68db2e8bb634 100644 --- 
a/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo_gcs.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_logo_gcs.py @@ -21,9 +21,11 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"): client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.LOGO_RECOGNITION] + features = [videointelligence.Feature.LOGO_RECOGNITION] - operation = client.annotate_video(input_uri=input_uri, features=features) + operation = client.annotate_video( + request={"features": features, "input_uri": input_uri} + ) print(u"Waiting for operation to complete...") response = operation.result() @@ -49,13 +51,13 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"): print( u"\n\tStart Time Offset : {}.{}".format( track.segment.start_time_offset.seconds, - track.segment.start_time_offset.nanos, + track.segment.start_time_offset.microseconds * 1000, ) ) print( u"\tEnd Time Offset : {}.{}".format( track.segment.end_time_offset.seconds, - track.segment.end_time_offset.nanos, + track.segment.end_time_offset.microseconds * 1000, ) ) print(u"\tConfidence : {}".format(track.confidence)) @@ -86,12 +88,14 @@ def detect_logo_gcs(input_uri="gs://YOUR_BUCKET_ID/path/to/your/file.mp4"): for segment in logo_recognition_annotation.segments: print( u"\n\tStart Time Offset : {}.{}".format( - segment.start_time_offset.seconds, segment.start_time_offset.nanos, + segment.start_time_offset.seconds, + segment.start_time_offset.microseconds * 1000, ) ) print( u"\tEnd Time Offset : {}.{}".format( - segment.end_time_offset.seconds, segment.end_time_offset.nanos, + segment.end_time_offset.seconds, + segment.end_time_offset.microseconds * 1000, ) ) diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_beta.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_beta.py index a7afd34cab49..143d1b7d1514 100644 --- 
a/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_beta.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_beta.py @@ -36,9 +36,11 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"): # Start the asynchronous request operation = client.annotate_video( - input_content=input_content, - features=[videointelligence.enums.Feature.PERSON_DETECTION], - video_context=context, + request={ + "features": [videointelligence.Feature.PERSON_DETECTION], + "input_content": input_content, + "video_context": context, + } ) print("\nProcessing video for person detection annotations.") @@ -55,9 +57,9 @@ def detect_person(local_file_path="path/to/your/video-file.mp4"): print( "Segment: {}s to {}s".format( track.segment.start_time_offset.seconds - + track.segment.start_time_offset.nanos / 1e9, + + track.segment.start_time_offset.microseconds / 1e6, track.segment.end_time_offset.seconds - + track.segment.end_time_offset.nanos / 1e9, + + track.segment.end_time_offset.microseconds / 1e6, ) ) diff --git a/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_gcs_beta.py b/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_gcs_beta.py index 9bd2d007b73f..7496fd254ca6 100644 --- a/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_gcs_beta.py +++ b/packages/google-cloud-videointelligence/samples/analyze/video_detect_person_gcs_beta.py @@ -31,9 +31,11 @@ def detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"): # Start the asynchronous request operation = client.annotate_video( - input_uri=gcs_uri, - features=[videointelligence.enums.Feature.PERSON_DETECTION], - video_context=context, + request={ + "features": [videointelligence.Feature.PERSON_DETECTION], + "input_uri": gcs_uri, + "video_context": context, + } ) print("\nProcessing video for person detection annotations.") @@ -50,9 +52,9 @@ def 
detect_person(gcs_uri="gs://YOUR_BUCKET_ID/path/to/your/video.mp4"): print( "Segment: {}s to {}s".format( track.segment.start_time_offset.seconds - + track.segment.start_time_offset.nanos / 1e9, + + track.segment.start_time_offset.microseconds / 1e6, track.segment.end_time_offset.seconds - + track.segment.end_time_offset.nanos / 1e9, + + track.segment.end_time_offset.microseconds / 1e6, ) ) diff --git a/packages/google-cloud-videointelligence/samples/labels/labels.py b/packages/google-cloud-videointelligence/samples/labels/labels.py index cfb4ad0c4259..8f8c31ddd0c1 100644 --- a/packages/google-cloud-videointelligence/samples/labels/labels.py +++ b/packages/google-cloud-videointelligence/samples/labels/labels.py @@ -32,6 +32,7 @@ import argparse from google.cloud import videointelligence + # [END video_label_tutorial_imports] @@ -39,44 +40,50 @@ def analyze_labels(path): """ Detects labels given a GCS path. """ # [START video_label_tutorial_construct_request] video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.LABEL_DETECTION] - operation = video_client.annotate_video(input_uri=path, features=features) + features = [videointelligence.Feature.LABEL_DETECTION] + operation = video_client.annotate_video( + request={"features": features, "input_uri": path} + ) # [END video_label_tutorial_construct_request] - print('\nProcessing video for label annotations:') + print("\nProcessing video for label annotations:") # [START video_label_tutorial_check_operation] result = operation.result(timeout=90) - print('\nFinished processing.') + print("\nFinished processing.") # [END video_label_tutorial_check_operation] # [START video_label_tutorial_parse_response] segment_labels = result.annotation_results[0].segment_label_annotations for i, segment_label in enumerate(segment_labels): - print('Video label description: {}'.format( - segment_label.entity.description)) + print("Video label description: 
{}".format(segment_label.entity.description)) for category_entity in segment_label.category_entities: - print('\tLabel category description: {}'.format( - category_entity.description)) + print( + "\tLabel category description: {}".format(category_entity.description) + ) for i, segment in enumerate(segment_label.segments): - start_time = (segment.segment.start_time_offset.seconds + - segment.segment.start_time_offset.nanos / 1e9) - end_time = (segment.segment.end_time_offset.seconds + - segment.segment.end_time_offset.nanos / 1e9) - positions = '{}s to {}s'.format(start_time, end_time) + start_time = ( + segment.segment.start_time_offset.seconds + + segment.segment.start_time_offset.microseconds / 1e6 + ) + end_time = ( + segment.segment.end_time_offset.seconds + + segment.segment.end_time_offset.microseconds / 1e6 + ) + positions = "{}s to {}s".format(start_time, end_time) confidence = segment.confidence - print('\tSegment {}: {}'.format(i, positions)) - print('\tConfidence: {}'.format(confidence)) - print('\n') + print("\tSegment {}: {}".format(i, positions)) + print("\tConfidence: {}".format(confidence)) + print("\n") # [END video_label_tutorial_parse_response] -if __name__ == '__main__': +if __name__ == "__main__": # [START video_label_tutorial_run_application] parser = argparse.ArgumentParser( - description=__doc__, - formatter_class=argparse.RawDescriptionHelpFormatter) - parser.add_argument('path', help='GCS file path for label detection.') + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("path", help="GCS file path for label detection.") args = parser.parse_args() analyze_labels(args.path) diff --git a/packages/google-cloud-videointelligence/samples/labels/labels_test.py b/packages/google-cloud-videointelligence/samples/labels/labels_test.py index 1249f5b95cbe..c678ddb43adc 100644 --- a/packages/google-cloud-videointelligence/samples/labels/labels_test.py +++ 
b/packages/google-cloud-videointelligence/samples/labels/labels_test.py @@ -21,6 +21,6 @@ @pytest.mark.slow def test_feline_video_labels(capsys): - labels.analyze_labels('gs://cloud-samples-data/video/cat.mp4') + labels.analyze_labels("gs://cloud-samples-data/video/cat.mp4") out, _ = capsys.readouterr() - assert 'Video label description: cat' in out + assert "Video label description: cat" in out diff --git a/packages/google-cloud-videointelligence/samples/labels/noxfile.py b/packages/google-cloud-videointelligence/samples/labels/noxfile.py index b90eef00f2d9..ab2c49227c3b 100644 --- a/packages/google-cloud-videointelligence/samples/labels/noxfile.py +++ b/packages/google-cloud-videointelligence/samples/labels/noxfile.py @@ -37,28 +37,25 @@ TEST_CONFIG = { # You can opt out from the test for specific Python versions. - 'ignored_versions': ["2.7"], - + "ignored_versions": ["2.7"], # Old samples are opted out of enforcing Python type hints # All new samples should feature them - 'enforce_type_hints': False, - + "enforce_type_hints": False, # An envvar key for determining the project id to use. Change it # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a # build specific Cloud project. You can also use your own string # to use your own Cloud project. - 'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT', + "gcloud_project_env": "GOOGLE_CLOUD_PROJECT", # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT', - # A dictionary you want to inject into your test. Don't put any # secrets here. These values will override predefined values. - 'envs': {}, + "envs": {}, } try: # Ensure we can import noxfile_config in the project's directory. - sys.path.append('.') + sys.path.append(".") from noxfile_config import TEST_CONFIG_OVERRIDE except ImportError as e: print("No user noxfile_config found: detail: {}".format(e)) @@ -73,12 +70,12 @@ def get_pytest_env_vars(): ret = {} # Override the GCLOUD_PROJECT and the alias. 
- env_key = TEST_CONFIG['gcloud_project_env'] + env_key = TEST_CONFIG["gcloud_project_env"] # This should error out if not set. - ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key] + ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key] # Apply user supplied envs. - ret.update(TEST_CONFIG['envs']) + ret.update(TEST_CONFIG["envs"]) return ret @@ -87,7 +84,7 @@ def get_pytest_env_vars(): ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"] # Any default versions that should be ignored. -IGNORED_VERSIONS = TEST_CONFIG['ignored_versions'] +IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"] TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS]) @@ -136,7 +133,7 @@ def _determine_local_import_names(start_dir): @nox.session def lint(session): - if not TEST_CONFIG['enforce_type_hints']: + if not TEST_CONFIG["enforce_type_hints"]: session.install("flake8", "flake8-import-order") else: session.install("flake8", "flake8-import-order", "flake8-annotations") @@ -145,9 +142,11 @@ def lint(session): args = FLAKE8_COMMON_ARGS + [ "--application-import-names", ",".join(local_names), - "." + ".", ] session.run("flake8", *args) + + # # Black # @@ -160,6 +159,7 @@ def blacken(session): session.run("black", *python_files) + # # Sample Tests # @@ -199,9 +199,9 @@ def py(session): if session.python in TESTED_VERSIONS: _session_tests(session) else: - session.skip("SKIPPED: {} tests are disabled for this sample.".format( - session.python - )) + session.skip( + "SKIPPED: {} tests are disabled for this sample.".format(session.python) + ) # diff --git a/packages/google-cloud-videointelligence/samples/quickstart/noxfile.py b/packages/google-cloud-videointelligence/samples/quickstart/noxfile.py index b90eef00f2d9..ab2c49227c3b 100644 --- a/packages/google-cloud-videointelligence/samples/quickstart/noxfile.py +++ b/packages/google-cloud-videointelligence/samples/quickstart/noxfile.py @@ -37,28 +37,25 @@ TEST_CONFIG = { # You can opt out from the test for specific Python versions. 
- 'ignored_versions': ["2.7"], - + "ignored_versions": ["2.7"], # Old samples are opted out of enforcing Python type hints # All new samples should feature them - 'enforce_type_hints': False, - + "enforce_type_hints": False, # An envvar key for determining the project id to use. Change it # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a # build specific Cloud project. You can also use your own string # to use your own Cloud project. - 'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT', + "gcloud_project_env": "GOOGLE_CLOUD_PROJECT", # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT', - # A dictionary you want to inject into your test. Don't put any # secrets here. These values will override predefined values. - 'envs': {}, + "envs": {}, } try: # Ensure we can import noxfile_config in the project's directory. - sys.path.append('.') + sys.path.append(".") from noxfile_config import TEST_CONFIG_OVERRIDE except ImportError as e: print("No user noxfile_config found: detail: {}".format(e)) @@ -73,12 +70,12 @@ def get_pytest_env_vars(): ret = {} # Override the GCLOUD_PROJECT and the alias. - env_key = TEST_CONFIG['gcloud_project_env'] + env_key = TEST_CONFIG["gcloud_project_env"] # This should error out if not set. - ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key] + ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key] # Apply user supplied envs. - ret.update(TEST_CONFIG['envs']) + ret.update(TEST_CONFIG["envs"]) return ret @@ -87,7 +84,7 @@ def get_pytest_env_vars(): ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"] # Any default versions that should be ignored. 
-IGNORED_VERSIONS = TEST_CONFIG['ignored_versions'] +IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"] TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS]) @@ -136,7 +133,7 @@ def _determine_local_import_names(start_dir): @nox.session def lint(session): - if not TEST_CONFIG['enforce_type_hints']: + if not TEST_CONFIG["enforce_type_hints"]: session.install("flake8", "flake8-import-order") else: session.install("flake8", "flake8-import-order", "flake8-annotations") @@ -145,9 +142,11 @@ def lint(session): args = FLAKE8_COMMON_ARGS + [ "--application-import-names", ",".join(local_names), - "." + ".", ] session.run("flake8", *args) + + # # Black # @@ -160,6 +159,7 @@ def blacken(session): session.run("black", *python_files) + # # Sample Tests # @@ -199,9 +199,9 @@ def py(session): if session.python in TESTED_VERSIONS: _session_tests(session) else: - session.skip("SKIPPED: {} tests are disabled for this sample.".format( - session.python - )) + session.skip( + "SKIPPED: {} tests are disabled for this sample.".format(session.python) + ) # diff --git a/packages/google-cloud-videointelligence/samples/quickstart/quickstart.py b/packages/google-cloud-videointelligence/samples/quickstart/quickstart.py index 19d126f01cb3..89f75d866825 100644 --- a/packages/google-cloud-videointelligence/samples/quickstart/quickstart.py +++ b/packages/google-cloud-videointelligence/samples/quickstart/quickstart.py @@ -28,9 +28,12 @@ def run_quickstart(): from google.cloud import videointelligence video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.LABEL_DETECTION] + features = [videointelligence.Feature.LABEL_DETECTION] operation = video_client.annotate_video( - "gs://cloud-samples-data/video/cat.mp4", features=features + request={ + "features": features, + "input_uri": "gs://cloud-samples-data/video/cat.mp4", + } ) print("\nProcessing video for label annotations:") @@ -49,11 +52,11 @@ def run_quickstart(): 
for i, segment in enumerate(segment_label.segments): start_time = ( segment.segment.start_time_offset.seconds - + segment.segment.start_time_offset.nanos / 1e9 + + segment.segment.start_time_offset.microseconds / 1e6 ) end_time = ( segment.segment.end_time_offset.seconds - + segment.segment.end_time_offset.nanos / 1e9 + + segment.segment.end_time_offset.microseconds / 1e6 ) positions = "{}s to {}s".format(start_time, end_time) confidence = segment.confidence diff --git a/packages/google-cloud-videointelligence/samples/shotchange/noxfile.py b/packages/google-cloud-videointelligence/samples/shotchange/noxfile.py index b90eef00f2d9..e38c11b7a7e2 100644 --- a/packages/google-cloud-videointelligence/samples/shotchange/noxfile.py +++ b/packages/google-cloud-videointelligence/samples/shotchange/noxfile.py @@ -37,28 +37,25 @@ TEST_CONFIG = { # You can opt out from the test for specific Python versions. - 'ignored_versions': ["2.7"], - + "ignored_versions": ["2.7"], # Old samples are opted out of enforcing Python type hints # All new samples should feature them - 'enforce_type_hints': False, - + "enforce_type_hints": False, # An envvar key for determining the project id to use. Change it # to 'BUILD_SPECIFIC_GCLOUD_PROJECT' if you want to opt in using a # build specific Cloud project. You can also use your own string # to use your own Cloud project. - 'gcloud_project_env': 'GOOGLE_CLOUD_PROJECT', + "gcloud_project_env": "GOOGLE_CLOUD_PROJECT", # 'gcloud_project_env': 'BUILD_SPECIFIC_GCLOUD_PROJECT', - # A dictionary you want to inject into your test. Don't put any # secrets here. These values will override predefined values. - 'envs': {}, + "envs": {}, } try: # Ensure we can import noxfile_config in the project's directory. 
- sys.path.append('.') + sys.path.append(".") from noxfile_config import TEST_CONFIG_OVERRIDE except ImportError as e: print("No user noxfile_config found: detail: {}".format(e)) @@ -73,12 +70,13 @@ def get_pytest_env_vars(): ret = {} # Override the GCLOUD_PROJECT and the alias. - env_key = TEST_CONFIG['gcloud_project_env'] + env_key = TEST_CONFIG["gcloud_project_env"] # This should error out if not set. - ret['GOOGLE_CLOUD_PROJECT'] = os.environ[env_key] + ret["GOOGLE_CLOUD_PROJECT"] = os.environ[env_key] # Apply user supplied envs. - ret.update(TEST_CONFIG['envs']) + ret.update(TEST_CONFIG["envs"]) + return ret @@ -87,7 +85,7 @@ def get_pytest_env_vars(): ALL_VERSIONS = ["2.7", "3.6", "3.7", "3.8"] # Any default versions that should be ignored. -IGNORED_VERSIONS = TEST_CONFIG['ignored_versions'] +IGNORED_VERSIONS = TEST_CONFIG["ignored_versions"] TESTED_VERSIONS = sorted([v for v in ALL_VERSIONS if v not in IGNORED_VERSIONS]) @@ -136,7 +134,7 @@ def _determine_local_import_names(start_dir): @nox.session def lint(session): - if not TEST_CONFIG['enforce_type_hints']: + if not TEST_CONFIG["enforce_type_hints"]: session.install("flake8", "flake8-import-order") else: session.install("flake8", "flake8-import-order", "flake8-annotations") @@ -145,9 +143,11 @@ def lint(session): args = FLAKE8_COMMON_ARGS + [ "--application-import-names", ",".join(local_names), - "." 
+ ".", ] session.run("flake8", *args) + + # # Black # @@ -160,6 +160,7 @@ def blacken(session): session.run("black", *python_files) + # # Sample Tests # @@ -199,9 +200,9 @@ def py(session): if session.python in TESTED_VERSIONS: _session_tests(session) else: - session.skip("SKIPPED: {} tests are disabled for this sample.".format( - session.python - )) + session.skip( + "SKIPPED: {} tests are disabled for this sample.".format(session.python) + ) # diff --git a/packages/google-cloud-videointelligence/samples/shotchange/shotchange.py b/packages/google-cloud-videointelligence/samples/shotchange/shotchange.py index 40edf0012cec..241289f96a62 100644 --- a/packages/google-cloud-videointelligence/samples/shotchange/shotchange.py +++ b/packages/google-cloud-videointelligence/samples/shotchange/shotchange.py @@ -31,6 +31,7 @@ import argparse from google.cloud import videointelligence + # [END video_shot_tutorial_imports] @@ -38,32 +39,37 @@ def analyze_shots(path): """ Detects camera shot changes. 
""" # [START video_shot_tutorial_construct_request] video_client = videointelligence.VideoIntelligenceServiceClient() - features = [videointelligence.enums.Feature.SHOT_CHANGE_DETECTION] - operation = video_client.annotate_video(input_uri=path, features=features) + features = [videointelligence.Feature.SHOT_CHANGE_DETECTION] + operation = video_client.annotate_video( + request={"features": features, "input_uri": path} + ) # [END video_shot_tutorial_construct_request] - print('\nProcessing video for shot change annotations:') + print("\nProcessing video for shot change annotations:") # [START video_shot_tutorial_check_operation] result = operation.result(timeout=120) - print('\nFinished processing.') + print("\nFinished processing.") + # [END video_shot_tutorial_check_operation] # [START video_shot_tutorial_parse_response] for i, shot in enumerate(result.annotation_results[0].shot_annotations): - start_time = (shot.start_time_offset.seconds + - shot.start_time_offset.nanos / 1e9) - end_time = (shot.end_time_offset.seconds + - shot.end_time_offset.nanos / 1e9) - print('\tShot {}: {} to {}'.format(i, start_time, end_time)) + start_time = ( + shot.start_time_offset.seconds + shot.start_time_offset.microseconds / 1e6 + ) + end_time = ( + shot.end_time_offset.seconds + shot.end_time_offset.microseconds / 1e6 + ) + print("\tShot {}: {} to {}".format(i, start_time, end_time)) # [END video_shot_tutorial_parse_response] -if __name__ == '__main__': +if __name__ == "__main__": # [START video_shot_tutorial_run_application] parser = argparse.ArgumentParser( - description=__doc__, - formatter_class=argparse.RawDescriptionHelpFormatter) - parser.add_argument('path', help='GCS path for shot change detection.') + description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter + ) + parser.add_argument("path", help="GCS path for shot change detection.") args = parser.parse_args() analyze_shots(args.path) diff --git 
a/packages/google-cloud-videointelligence/samples/shotchange/shotchange_test.py b/packages/google-cloud-videointelligence/samples/shotchange/shotchange_test.py index 0722e0c66aaf..bb480351673d 100644 --- a/packages/google-cloud-videointelligence/samples/shotchange/shotchange_test.py +++ b/packages/google-cloud-videointelligence/samples/shotchange/shotchange_test.py @@ -21,7 +21,6 @@ @pytest.mark.slow def test_shots_dino(capsys): - shotchange.analyze_shots( - 'gs://cloud-samples-data/video/gbikes_dinosaur.mp4') + shotchange.analyze_shots("gs://cloud-samples-data/video/gbikes_dinosaur.mp4") out, _ = capsys.readouterr() - assert 'Shot 1:' in out + assert "Shot 1:" in out diff --git a/packages/google-cloud-videointelligence/scripts/fixup_keywords.py b/packages/google-cloud-videointelligence/scripts/fixup_keywords.py new file mode 100644 index 000000000000..cda96fc572ce --- /dev/null +++ b/packages/google-cloud-videointelligence/scripts/fixup_keywords.py @@ -0,0 +1,179 @@ +#! /usr/bin/env python3 +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import argparse +import os +import libcst as cst +import pathlib +import sys +from typing import (Any, Callable, Dict, List, Sequence, Tuple) + + +def partition( + predicate: Callable[[Any], bool], + iterator: Sequence[Any] +) -> Tuple[List[Any], List[Any]]: + """A stable, out-of-place partition.""" + results = ([], []) + + for i in iterator: + results[int(predicate(i))].append(i) + + # Returns trueList, falseList + return results[1], results[0] + + +class videointelligenceCallTransformer(cst.CSTTransformer): + CTRL_PARAMS: Tuple[str] = ('retry', 'timeout', 'metadata') + METHOD_TO_PARAMS: Dict[str, Tuple[str]] = { + 'annotate_video': ('features', 'input_uri', 'input_content', 'video_context', 'output_uri', 'location_id', ), + + } + + def leave_Call(self, original: cst.Call, updated: cst.Call) -> cst.CSTNode: + try: + key = original.func.attr.value + kword_params = self.METHOD_TO_PARAMS[key] + except (AttributeError, KeyError): + # Either not a method from the API or too convoluted to be sure. + return updated + + # If the existing code is valid, keyword args come after positional args. + # Therefore, all positional args must map to the first parameters. + args, kwargs = partition(lambda a: not bool(a.keyword), updated.args) + if any(k.keyword.value == "request" for k in kwargs): + # We've already fixed this file, don't fix it again. 
+ return updated + + kwargs, ctrl_kwargs = partition( + lambda a: not a.keyword.value in self.CTRL_PARAMS, + kwargs + ) + + args, ctrl_args = args[:len(kword_params)], args[len(kword_params):] + ctrl_kwargs.extend(cst.Arg(value=a.value, keyword=cst.Name(value=ctrl)) + for a, ctrl in zip(ctrl_args, self.CTRL_PARAMS)) + + request_arg = cst.Arg( + value=cst.Dict([ + cst.DictElement( + cst.SimpleString("'{}'".format(name)), + cst.Element(value=arg.value) + ) + # Note: the args + kwargs looks silly, but keep in mind that + # the control parameters had to be stripped out, and that + # those could have been passed positionally or by keyword. + for name, arg in zip(kword_params, args + kwargs)]), + keyword=cst.Name("request") + ) + + return updated.with_changes( + args=[request_arg] + ctrl_kwargs + ) + + +def fix_files( + in_dir: pathlib.Path, + out_dir: pathlib.Path, + *, + transformer=videointelligenceCallTransformer(), +): + """Duplicate the input dir to the output dir, fixing file method calls. + + Preconditions: + * in_dir is a real directory + * out_dir is a real, empty directory + """ + pyfile_gen = ( + pathlib.Path(os.path.join(root, f)) + for root, _, files in os.walk(in_dir) + for f in files if os.path.splitext(f)[1] == ".py" + ) + + for fpath in pyfile_gen: + with open(fpath, 'r') as f: + src = f.read() + + # Parse the code and insert method call fixes. + tree = cst.parse_module(src) + updated = tree.visit(transformer) + + # Create the path and directory structure for the new file. + updated_path = out_dir.joinpath(fpath.relative_to(in_dir)) + updated_path.parent.mkdir(parents=True, exist_ok=True) + + # Generate the updated source file at the corresponding path. + with open(updated_path, 'w') as f: + f.write(updated.code) + + +if __name__ == '__main__': + parser = argparse.ArgumentParser( + description="""Fix up source that uses the videointelligence client library. + +The existing sources are NOT overwritten but are copied to output_dir with changes made. 
+ +Note: This tool operates at a best-effort level at converting positional + parameters in client method calls to keyword based parameters. + Cases where it WILL FAIL include + A) * or ** expansion in a method call. + B) Calls via function or method alias (includes free function calls) + C) Indirect or dispatched calls (e.g. the method is looked up dynamically) + + These all constitute false negatives. The tool will also detect false + positives when an API method shares a name with another method. +""") + parser.add_argument( + '-d', + '--input-directory', + required=True, + dest='input_dir', + help='the input directory to walk for python files to fix up', + ) + parser.add_argument( + '-o', + '--output-directory', + required=True, + dest='output_dir', + help='the directory to output files fixed via un-flattening', + ) + args = parser.parse_args() + input_dir = pathlib.Path(args.input_dir) + output_dir = pathlib.Path(args.output_dir) + if not input_dir.is_dir(): + print( + f"input directory '{input_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if not output_dir.is_dir(): + print( + f"output directory '{output_dir}' does not exist or is not a directory", + file=sys.stderr, + ) + sys.exit(-1) + + if os.listdir(output_dir): + print( + f"output directory '{output_dir}' is not empty", + file=sys.stderr, + ) + sys.exit(-1) + + fix_files(input_dir, output_dir) diff --git a/packages/google-cloud-videointelligence/setup.py b/packages/google-cloud-videointelligence/setup.py index 61ea4d7f36db..a598098a852e 100644 --- a/packages/google-cloud-videointelligence/setup.py +++ b/packages/google-cloud-videointelligence/setup.py @@ -28,8 +28,9 @@ # 'Development Status :: 4 - Beta' # 'Development Status :: 5 - Production/Stable' release_status = "Development Status :: 5 - Production/Stable" -dependencies = ["google-api-core[grpc] >= 1.14.0, < 2.0.0dev"] -extras = {} +dependencies = ["google-api-core[grpc] >= 1.22.0, < 2.0.0dev", "proto-plus 
>= 1.10.0"] +extras = {"libcst": "libcst >= 0.2.5"} +scripts = ["scripts/fixup_keywords.py"] # Setup boilerplate below this line. @@ -43,7 +44,9 @@ # Only include packages under the 'google' namespace. Do not include tests, # benchmarks, etc. packages = [ - package for package in setuptools.find_packages() if package.startswith("google") + package + for package in setuptools.PEP420PackageFinder.find() + if package.startswith("google") ] # Determine which namespaces are needed. @@ -66,10 +69,7 @@ "Intended Audience :: Developers", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python", - "Programming Language :: Python :: 2", - "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Operating System :: OS Independent", @@ -80,7 +80,8 @@ namespace_packages=namespaces, install_requires=dependencies, extras_require=extras, - python_requires=">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*", + python_requires=">=3.6", + scripts=scripts, include_package_data=True, zip_safe=False, ) diff --git a/packages/google-cloud-videointelligence/synth.metadata b/packages/google-cloud-videointelligence/synth.metadata index 8791655fbb2f..35b127b8d2fd 100644 --- a/packages/google-cloud-videointelligence/synth.metadata +++ b/packages/google-cloud-videointelligence/synth.metadata @@ -4,29 +4,29 @@ "git": { "name": ".", "remote": "git@github.com:googleapis/python-videointelligence.git", - "sha": "8ecf08ee0310d149f83c6496b7fa78f603122d8a" + "sha": "128b6c8cc9fcf6da538f0fd0adcd5dd885431ccb" } }, { "git": { "name": "googleapis", "remote": "https://github.com/googleapis/googleapis.git", - "sha": "6a69c750c3f01a69017662395f90515bbf1fe1ff", - "internalRef": "342721036" + "sha": "5fdb685a684269e07c10c7518372eb5d7b6bc0a9", + "internalRef": "342906697" } }, { "git": { "name": "synthtool", "remote": 
"https://github.com/googleapis/synthtool.git", - "sha": "d5fc0bcf9ea9789c5b0e3154a9e3b29e5cea6116" + "sha": "7fcc405a579d5d53a726ff3da1b7c8c08f0f2d58" } }, { "git": { "name": "synthtool", "remote": "https://github.com/googleapis/synthtool.git", - "sha": "d5fc0bcf9ea9789c5b0e3154a9e3b29e5cea6116" + "sha": "7fcc405a579d5d53a726ff3da1b7c8c08f0f2d58" } } ], diff --git a/packages/google-cloud-videointelligence/synth.py b/packages/google-cloud-videointelligence/synth.py index 1da1d4ed6aeb..8e6130153738 100644 --- a/packages/google-cloud-videointelligence/synth.py +++ b/packages/google-cloud-videointelligence/synth.py @@ -14,7 +14,6 @@ """This script is used to synthesize generated parts of this library.""" import logging -import re import synthtool as s @@ -27,7 +26,6 @@ common = gcp.CommonTemplates() versions = ["v1beta2", "v1p1beta1", "v1p2beta1", "v1p3beta1", "v1"] - # ---------------------------------------------------------------------------- # Generate videointelligence GAPIC layer # ---------------------------------------------------------------------------- @@ -39,18 +37,12 @@ include_protos=True, ) - # TODO: stop excluding tests and nox.py (excluded as we lack system tests) s.move( library, excludes=[ "setup.py", - "nox*.py", "README.rst", "docs/index.rst", - f"tests/system/gapic/{version}/" - f"test_system_video_intelligence_service_{version}.py", - # f'tests/unit/gapic/{version}/' - # f'test_video_intelligence_service_client_{version}.py', ], ) s.replace( @@ -60,77 +52,13 @@ "google-cloud-videointelligence", ) -s.replace( - "tests/unit/gapic/**/test_video_intelligence_service_client_*.py", - "^(\s+)expected_request = video_intelligence_pb2.AnnotateVideoRequest\(\)", - "\g<1>expected_request = video_intelligence_pb2.AnnotateVideoRequest(\n" - "\g<1> input_uri=input_uri, features=features)", -) - -# Keep features a keyword param until the microgenerator migration -# https://github.com/googleapis/python-videointelligence/issues/7 -# 
------------------------------------------------------------------------------- -s.replace( - "google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py", - ">>> response = client\.annotate_video\(features, input_uri=input_uri\)", - ">>> response = client.annotate_video(input_uri=input_uri, features=features)", -) -s.replace( - "google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py", - """(?P\s+features \(list.+?)\n""" - """(?P\s+input_uri \(str\).+?should be unset\.\n)""" - """(?P\s+input_content \(bytes\).+?should be unset\.)""", - """\n\g\g\g""", - re.DOTALL | re.MULTILINE, -) - -s.replace( - "google/cloud/videointelligence_v1/gapic/video_intelligence_service_client.py", - """ self, - features, - input_uri=None, - input_content=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google\.api_core\.gapic_v1\.method\.DEFAULT, - timeout=google\.api_core\.gapic_v1\.method\.DEFAULT, - metadata=None""", - """ self, - input_uri=None, - input_content=None, - # NOTE: Keep features a keyword param that comes after `input_uri` until - # the microgenerator migration to avoid breaking users. - # See https://github.com/googleapis/python-videointelligence/issues/7. 
- features=None, - video_context=None, - output_uri=None, - location_id=None, - retry=google.api_core.gapic_v1.method.DEFAULT, - timeout=google.api_core.gapic_v1.method.DEFAULT, - metadata=None""", -) - -s.replace( - "tests/**/test_video_intelligence_service_client_v1.py", - "response = client\.annotate_video\(features, input_uri=input_uri\)", - "response = client.annotate_video(input_uri=input_uri, features=features)", -) - -# Add missing blank line before Attributes: in generated docstrings -# Remove after -# https://github.com/googleapis/protoc-docs-plugin/pull/31 -s.replace("google/cloud/**/*_pb2.py", "(\s+)Attributes:", "\n\g<1>Attributes:") - -# Add noindex to types docs to silence warnings about duplicates -# TODO: Remove during microgenerator transition -s.replace("docs/gapic/**/types.rst", "(\s+):members:", "\g<1>:members:\g<1>:noindex:") - # ---------------------------------------------------------------------------- # Add templated files # ---------------------------------------------------------------------------- -templated_files = common.py_library(cov_level=70, samples=True) -s.move(templated_files) +templated_files = common.py_library(cov_level=70, samples=True, microgenerator=True) + +# microgenerator has a good .coveragerc file +s.move(templated_files, excludes=[".coveragerc"]) # ---------------------------------------------------------------------------- # Samples templates diff --git a/packages/google-cloud-videointelligence/tests/system/test_system.py b/packages/google-cloud-videointelligence/tests/system/test_system.py index 6c3ab1733b5f..51f39a7e49d1 100644 --- a/packages/google-cloud-videointelligence/tests/system/test_system.py +++ b/packages/google-cloud-videointelligence/tests/system/test_system.py @@ -38,7 +38,7 @@ def _get_video_uri(): def test_annotate_video(client): - features_element = videointelligence_v1.enums.Feature.LABEL_DETECTION + features_element = videointelligence_v1.Feature.LABEL_DETECTION features = [features_element] 
response = client.annotate_video(input_uri=_get_video_uri(), features=features) diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1/test_video_intelligence_service_client_v1.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1/test_video_intelligence_service_client_v1.py deleted file mode 100644 index 7c829cef4d1b..000000000000 --- a/packages/google-cloud-videointelligence/tests/unit/gapic/v1/test_video_intelligence_service_client_v1.py +++ /dev/null @@ -1,123 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import videointelligence_v1 -from google.cloud.videointelligence_v1 import enums -from google.cloud.videointelligence_v1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestVideoIntelligenceServiceClient(object): - def test_annotate_video(self): - # Setup Expected Response - expected_response = {} - expected_response = video_intelligence_pb2.AnnotateVideoResponse( - **expected_response - ) - operation = operations_pb2.Operation( - name="operations/test_annotate_video", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = 
"gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(input_uri=input_uri, features=features) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, input_uri=input_uri - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_annotate_video_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_annotate_video_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(input_uri=input_uri, features=features) - exception = response.exception() - assert exception.errors[0] == error diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1beta2/test_video_intelligence_service_client_v1beta2.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1beta2/test_video_intelligence_service_client_v1beta2.py deleted file mode 100644 index febc6a4c648a..000000000000 --- a/packages/google-cloud-videointelligence/tests/unit/gapic/v1beta2/test_video_intelligence_service_client_v1beta2.py +++ /dev/null @@ -1,123 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import videointelligence_v1beta2 -from google.cloud.videointelligence_v1beta2 import enums -from google.cloud.videointelligence_v1beta2.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestVideoIntelligenceServiceClient(object): - def test_annotate_video(self): - # Setup Expected Response - expected_response = {} - expected_response = video_intelligence_pb2.AnnotateVideoResponse( - **expected_response - ) - operation = operations_pb2.Operation( - name="operations/test_annotate_video", done=True - ) - operation.response.Pack(expected_response) - - 
# Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1beta2.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, input_uri=input_uri - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_annotate_video_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_annotate_video_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1beta2.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - exception = response.exception() - assert exception.errors[0] == error diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p1beta1/test_video_intelligence_service_client_v1p1beta1.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1p1beta1/test_video_intelligence_service_client_v1p1beta1.py deleted file mode 100644 index 2e86709b3fab..000000000000 --- 
a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p1beta1/test_video_intelligence_service_client_v1p1beta1.py +++ /dev/null @@ -1,123 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import videointelligence_v1p1beta1 -from google.cloud.videointelligence_v1p1beta1 import enums -from google.cloud.videointelligence_v1p1beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class 
TestVideoIntelligenceServiceClient(object): - def test_annotate_video(self): - # Setup Expected Response - expected_response = {} - expected_response = video_intelligence_pb2.AnnotateVideoResponse( - **expected_response - ) - operation = operations_pb2.Operation( - name="operations/test_annotate_video", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p1beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, input_uri=input_uri - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_annotate_video_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_annotate_video_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p1beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - exception = response.exception() - assert exception.errors[0] == error diff 
--git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p2beta1/test_video_intelligence_service_client_v1p2beta1.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1p2beta1/test_video_intelligence_service_client_v1p2beta1.py deleted file mode 100644 index 3de5951c714b..000000000000 --- a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p2beta1/test_video_intelligence_service_client_v1p2beta1.py +++ /dev/null @@ -1,123 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import videointelligence_v1p2beta1 -from google.cloud.videointelligence_v1p2beta1 import enums -from google.cloud.videointelligence_v1p2beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestVideoIntelligenceServiceClient(object): - def test_annotate_video(self): - # Setup Expected Response - expected_response = {} - expected_response = video_intelligence_pb2.AnnotateVideoResponse( - **expected_response - ) - operation = operations_pb2.Operation( - name="operations/test_annotate_video", done=True - ) - operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p2beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = 
"gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, input_uri=input_uri - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_annotate_video_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_annotate_video_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p2beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - exception = response.exception() - assert exception.errors[0] == error diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_streaming_video_intelligence_service_client_v1p3beta1.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_streaming_video_intelligence_service_client_v1p3beta1.py deleted file mode 100644 index f6f6e7accc6c..000000000000 --- a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_streaming_video_intelligence_service_client_v1p3beta1.py +++ /dev/null @@ -1,115 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Unit tests.""" - -import mock -import pytest - -from google.cloud import videointelligence_v1p3beta1 -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def stream_stream( - self, method, request_serializer=None, response_deserializer=None - ): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestStreamingVideoIntelligenceServiceClient(object): - def test_streaming_annotate_video(self): - # Setup Expected Response - annotation_results_uri = "annotationResultsUri-238075757" - expected_response = {"annotation_results_uri": annotation_results_uri} - expected_response = video_intelligence_pb2.StreamingAnnotateVideoResponse( - **expected_response - ) - - # Mock the API response - channel = ChannelStub(responses=[iter([expected_response])]) - patch = 
mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = ( - videointelligence_v1p3beta1.StreamingVideoIntelligenceServiceClient() - ) - - # Setup Request - request = {} - request = video_intelligence_pb2.StreamingAnnotateVideoRequest(**request) - requests = [request] - - response = client.streaming_annotate_video(requests) - resources = list(response) - assert len(resources) == 1 - assert expected_response == resources[0] - - assert len(channel.requests) == 1 - actual_requests = channel.requests[0][1] - assert len(actual_requests) == 1 - actual_request = list(actual_requests)[0] - assert request == actual_request - - def test_streaming_annotate_video_exception(self): - # Mock the API response - channel = ChannelStub(responses=[CustomException()]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = ( - videointelligence_v1p3beta1.StreamingVideoIntelligenceServiceClient() - ) - - # Setup request - request = {} - - request = video_intelligence_pb2.StreamingAnnotateVideoRequest(**request) - requests = [request] - - with pytest.raises(CustomException): - client.streaming_annotate_video(requests) diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_video_intelligence_service_client_v1p3beta1.py b/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_video_intelligence_service_client_v1p3beta1.py deleted file mode 100644 index 4f07261a390a..000000000000 --- a/packages/google-cloud-videointelligence/tests/unit/gapic/v1p3beta1/test_video_intelligence_service_client_v1p3beta1.py +++ /dev/null @@ -1,123 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright 2020 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# https://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Unit tests.""" - -import mock -import pytest - -from google.rpc import status_pb2 - -from google.cloud import videointelligence_v1p3beta1 -from google.cloud.videointelligence_v1p3beta1 import enums -from google.cloud.videointelligence_v1p3beta1.proto import video_intelligence_pb2 -from google.longrunning import operations_pb2 - - -class MultiCallableStub(object): - """Stub for the grpc.UnaryUnaryMultiCallable interface.""" - - def __init__(self, method, channel_stub): - self.method = method - self.channel_stub = channel_stub - - def __call__(self, request, timeout=None, metadata=None, credentials=None): - self.channel_stub.requests.append((self.method, request)) - - response = None - if self.channel_stub.responses: - response = self.channel_stub.responses.pop() - - if isinstance(response, Exception): - raise response - - if response: - return response - - -class ChannelStub(object): - """Stub for the grpc.Channel interface.""" - - def __init__(self, responses=[]): - self.responses = responses - self.requests = [] - - def unary_unary(self, method, request_serializer=None, response_deserializer=None): - return MultiCallableStub(method, self) - - -class CustomException(Exception): - pass - - -class TestVideoIntelligenceServiceClient(object): - def test_annotate_video(self): - # Setup Expected Response - expected_response = {} - expected_response = video_intelligence_pb2.AnnotateVideoResponse( - **expected_response - ) - operation = operations_pb2.Operation( - name="operations/test_annotate_video", done=True - ) - 
operation.response.Pack(expected_response) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p3beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - result = response.result() - assert expected_response == result - - assert len(channel.requests) == 1 - expected_request = video_intelligence_pb2.AnnotateVideoRequest( - features=features, input_uri=input_uri - ) - actual_request = channel.requests[0][1] - assert expected_request == actual_request - - def test_annotate_video_exception(self): - # Setup Response - error = status_pb2.Status() - operation = operations_pb2.Operation( - name="operations/test_annotate_video_exception", done=True - ) - operation.error.CopyFrom(error) - - # Mock the API response - channel = ChannelStub(responses=[operation]) - patch = mock.patch("google.api_core.grpc_helpers.create_channel") - with patch as create_channel: - create_channel.return_value = channel - client = videointelligence_v1p3beta1.VideoIntelligenceServiceClient() - - # Setup Request - features_element = enums.Feature.LABEL_DETECTION - features = [features_element] - input_uri = "gs://cloud-samples-data/video/cat.mp4" - - response = client.annotate_video(features, input_uri=input_uri) - exception = response.exception() - assert exception.errors[0] == error diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/__init__.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/__init__.py new file mode 100644 index 000000000000..8b137891791f --- /dev/null +++ 
b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/__init__.py @@ -0,0 +1 @@ + diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/test_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/test_video_intelligence_service.py new file mode 100644 index 000000000000..f474b1b55492 --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1/test_video_intelligence_service.py @@ -0,0 +1,1079 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+# + +import os +import mock + +import grpc +from grpc.experimental import aio +import math +import pytest +from proto.marshal.rules.dates import DurationRule, TimestampRule + +from google import auth +from google.api_core import client_options +from google.api_core import exceptions +from google.api_core import future +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.api_core import operation_async # type: ignore +from google.api_core import operations_v1 +from google.auth import credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.videointelligence_v1.services.video_intelligence_service import ( + VideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1.services.video_intelligence_service import ( + VideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1.services.video_intelligence_service import ( + transports, +) +from google.cloud.videointelligence_v1.types import video_intelligence +from google.longrunning import operations_pb2 +from google.oauth2 import service_account +from google.protobuf import duration_pb2 as duration # type: ignore + + +def client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# If default endpoint is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. 
+def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert VideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(non_googleapi) + == non_googleapi + ) + + +@pytest.mark.parametrize( + "client_class", + [VideoIntelligenceServiceClient, VideoIntelligenceServiceAsyncClient], +) +def test_video_intelligence_service_client_from_service_account_file(client_class): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file("dummy/file/path.json") + assert client.transport._credentials == creds + + client = client_class.from_service_account_json("dummy/file/path.json") + assert client.transport._credentials == creds + + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_client_get_transport_class(): + transport = VideoIntelligenceServiceClient.get_transport_class() + assert transport == 
transports.VideoIntelligenceServiceGrpcTransport + + transport = VideoIntelligenceServiceClient.get_transport_class("grpc") + assert transport == transports.VideoIntelligenceServiceGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +def test_video_intelligence_service_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + transport = transport_class(credentials=credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. 
+ options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError): + client = client_class() + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError): + client = client_class() + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "true", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "false", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_video_intelligence_service_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. 
+ + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + ssl_channel_creds = mock.Mock() + with mock.patch( + "grpc.ssl_channel_credentials", return_value=ssl_channel_creds + ): + patched.return_value = None + client = client_class(client_options=options) + + if use_client_cert_env == "false": + expected_ssl_channel_creds = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_ssl_channel_creds = ssl_channel_creds + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.ssl_credentials", + new_callable=mock.PropertyMock, + ) as ssl_credentials_mock: + if use_client_cert_env == "false": + is_mtls_mock.return_value = False + ssl_credentials_mock.return_value = None + expected_host = client.DEFAULT_ENDPOINT + expected_ssl_channel_creds = None + else: + is_mtls_mock.return_value = True + ssl_credentials_mock.return_value = mock.Mock() + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_ssl_channel_creds = ( + ssl_credentials_mock.return_value + ) + + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + is_mtls_mock.return_value = False + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
+ options = client_options.ClientOptions(scopes=["1", "2"],) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=["1", "2"], + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_credentials_file( + client_class, transport_class, transport_name +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_video_intelligence_service_client_client_options_from_dict(): + with mock.patch( + "google.cloud.videointelligence_v1.services.video_intelligence_service.transports.VideoIntelligenceServiceGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = VideoIntelligenceServiceClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, 
+ client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_annotate_video( + transport: str = "grpc", request_type=video_intelligence.AnnotateVideoRequest +): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + + response = client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_annotate_video_from_dict(): + test_annotate_video(request_type=dict) + + +@pytest.mark.asyncio +async def test_annotate_video_async( + transport: str = "grpc_asyncio", + request_type=video_intelligence.AnnotateVideoRequest, +): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + + response = await client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_annotate_video_async_from_dict(): + await test_annotate_video_async(request_type=dict) + + +def test_annotate_video_flattened(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +def test_annotate_video_flattened_error(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + response = await client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_error_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"scopes": ["1", "2"]}, transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + client = VideoIntelligenceServiceClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, transports.VideoIntelligenceServiceGrpcTransport, + ) + + +def test_video_intelligence_service_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(exceptions.DuplicateCredentialArgs): + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_video_intelligence_service_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.videointelligence_v1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ("annotate_video",) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + +def test_video_intelligence_service_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + auth, "load_credentials_from_file" + ) as load_creds, mock.patch( + "google.cloud.videointelligence_v1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport( + credentials_file="credentials.json", quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(auth, "default") as adc, mock.patch( + "google.cloud.videointelligence_v1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport() + adc.assert_called_once() + + +def test_video_intelligence_service_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + VideoIntelligenceServiceClient() + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id=None, + ) + + +def test_video_intelligence_service_transport_auth_adc(): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transports.VideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", quota_project_id="octopus" + ) + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_host_no_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_host_with_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com:8000" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:8000" + + +def test_video_intelligence_service_grpc_transport_channel(): + channel = grpc.insecure_channel("http://localhost/") + + # Check that channel is used if provided. 
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_video_intelligence_service_grpc_asyncio_transport_channel():
+    channel = aio.insecure_channel("http://localhost/")
+
+    # Check that channel is used if provided.
+    transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_client_cert_source(
+    transport_class,
+):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+                credentials_file=None,
+                scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+            assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_adc(transport_class):
+    mock_ssl_cred = mock.Mock()
+    with mock.patch.multiple(
+        "google.auth.transport.grpc.SslCredentials",
+        __init__=mock.Mock(return_value=None),
+        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+    ):
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+            mock_cred = mock.Mock()
+
+            with pytest.warns(DeprecationWarning):
+                transport = transport_class(
+                    host="squid.clam.whelk",
+                    credentials=mock_cred,
+                    api_mtls_endpoint="mtls.squid.clam.whelk",
+                    client_cert_source=None,
+                )
+
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=mock_cred,
+                credentials_file=None,
+                scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_video_intelligence_service_grpc_lro_client():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_video_intelligence_service_grpc_lro_async_client():
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc_asyncio",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_common_billing_account_path():
+    billing_account = "squid"
+
+    expected = "billingAccounts/{billing_account}".format(
+        billing_account=billing_account,
+    )
+    actual = VideoIntelligenceServiceClient.common_billing_account_path(billing_account)
+    assert expected == actual
+
+
+def test_parse_common_billing_account_path():
+    expected = {
+        "billing_account": "clam",
+    }
+    path = VideoIntelligenceServiceClient.common_billing_account_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_billing_account_path(path)
+    assert expected == actual
+
+
+def test_common_folder_path():
+    folder = "whelk"
+
+    expected = "folders/{folder}".format(folder=folder,)
+    actual = VideoIntelligenceServiceClient.common_folder_path(folder)
+    assert expected == actual
+
+
+def test_parse_common_folder_path():
+    expected = {
+        "folder": "octopus",
+    }
+    path = VideoIntelligenceServiceClient.common_folder_path(**expected)
+
+    # Check that the path construction is reversible.
+ actual = VideoIntelligenceServiceClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "oyster" + + expected = "organizations/{organization}".format(organization=organization,) + actual = VideoIntelligenceServiceClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "nudibranch", + } + path = VideoIntelligenceServiceClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "cuttlefish" + + expected = "projects/{project}".format(project=project,) + actual = VideoIntelligenceServiceClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "mussel", + } + path = VideoIntelligenceServiceClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "winkle" + location = "nautilus" + + expected = "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + actual = VideoIntelligenceServiceClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "scallop", + "location": "abalone", + } + path = VideoIntelligenceServiceClient.common_location_path(**expected) + + # Check that the path construction is reversible. 
+ actual = VideoIntelligenceServiceClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_withDEFAULT_CLIENT_INFO(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = VideoIntelligenceServiceClient.get_transport_class() + transport = transport_class( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/__init__.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/__init__.py new file mode 100644 index 000000000000..8b137891791f --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/__init__.py @@ -0,0 +1 @@ + diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/test_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/test_video_intelligence_service.py new file mode 100644 index 000000000000..dd95556e0350 --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1beta2/test_video_intelligence_service.py @@ -0,0 +1,1079 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import os +import mock + +import grpc +from grpc.experimental import aio +import math +import pytest +from proto.marshal.rules.dates import DurationRule, TimestampRule + +from google import auth +from google.api_core import client_options +from google.api_core import exceptions +from google.api_core import future +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.api_core import operation_async # type: ignore +from google.api_core import operations_v1 +from google.auth import credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.videointelligence_v1beta2.services.video_intelligence_service import ( + VideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1beta2.services.video_intelligence_service import ( + VideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1beta2.services.video_intelligence_service import ( + transports, +) +from google.cloud.videointelligence_v1beta2.types import video_intelligence +from google.longrunning import operations_pb2 +from google.oauth2 import service_account +from google.protobuf import duration_pb2 as duration # type: ignore + + +def client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# If default endpoint is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. 
+def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert VideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(non_googleapi) + == non_googleapi + ) + + +@pytest.mark.parametrize( + "client_class", + [VideoIntelligenceServiceClient, VideoIntelligenceServiceAsyncClient], +) +def test_video_intelligence_service_client_from_service_account_file(client_class): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file("dummy/file/path.json") + assert client.transport._credentials == creds + + client = client_class.from_service_account_json("dummy/file/path.json") + assert client.transport._credentials == creds + + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_client_get_transport_class(): + transport = VideoIntelligenceServiceClient.get_transport_class() + assert transport == 
transports.VideoIntelligenceServiceGrpcTransport + + transport = VideoIntelligenceServiceClient.get_transport_class("grpc") + assert transport == transports.VideoIntelligenceServiceGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +def test_video_intelligence_service_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + transport = transport_class(credentials=credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. 
+ options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError): + client = client_class() + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError): + client = client_class() + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "true", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "false", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_video_intelligence_service_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. 
+ + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + ssl_channel_creds = mock.Mock() + with mock.patch( + "grpc.ssl_channel_credentials", return_value=ssl_channel_creds + ): + patched.return_value = None + client = client_class(client_options=options) + + if use_client_cert_env == "false": + expected_ssl_channel_creds = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_ssl_channel_creds = ssl_channel_creds + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.ssl_credentials", + new_callable=mock.PropertyMock, + ) as ssl_credentials_mock: + if use_client_cert_env == "false": + is_mtls_mock.return_value = False + ssl_credentials_mock.return_value = None + expected_host = client.DEFAULT_ENDPOINT + expected_ssl_channel_creds = None + else: + is_mtls_mock.return_value = True + ssl_credentials_mock.return_value = mock.Mock() + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_ssl_channel_creds = ( + ssl_credentials_mock.return_value + ) + + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + is_mtls_mock.return_value = False + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
+ options = client_options.ClientOptions(scopes=["1", "2"],) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=["1", "2"], + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_credentials_file( + client_class, transport_class, transport_name +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_video_intelligence_service_client_client_options_from_dict(): + with mock.patch( + "google.cloud.videointelligence_v1beta2.services.video_intelligence_service.transports.VideoIntelligenceServiceGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = VideoIntelligenceServiceClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + 
quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_annotate_video( + transport: str = "grpc", request_type=video_intelligence.AnnotateVideoRequest +): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + + response = client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_annotate_video_from_dict(): + test_annotate_video(request_type=dict) + + +@pytest.mark.asyncio +async def test_annotate_video_async( + transport: str = "grpc_asyncio", + request_type=video_intelligence.AnnotateVideoRequest, +): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + + response = await client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_annotate_video_async_from_dict(): + await test_annotate_video_async(request_type=dict) + + +def test_annotate_video_flattened(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +def test_annotate_video_flattened_error(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + response = await client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_error_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"scopes": ["1", "2"]}, transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + client = VideoIntelligenceServiceClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, transports.VideoIntelligenceServiceGrpcTransport, + ) + + +def test_video_intelligence_service_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(exceptions.DuplicateCredentialArgs): + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_video_intelligence_service_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.videointelligence_v1beta2.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ("annotate_video",) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + +def test_video_intelligence_service_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + auth, "load_credentials_from_file" + ) as load_creds, mock.patch( + "google.cloud.videointelligence_v1beta2.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport( + credentials_file="credentials.json", quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(auth, "default") as adc, mock.patch( + "google.cloud.videointelligence_v1beta2.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport() + adc.assert_called_once() + + +def test_video_intelligence_service_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + VideoIntelligenceServiceClient() + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id=None, + ) + + +def test_video_intelligence_service_transport_auth_adc(): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transports.VideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", quota_project_id="octopus" + ) + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_host_no_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_host_with_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com:8000" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:8000" + + +def test_video_intelligence_service_grpc_transport_channel(): + channel = grpc.insecure_channel("http://localhost/") + + # Check that channel is used if provided. 
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_video_intelligence_service_grpc_asyncio_transport_channel():
+    channel = aio.insecure_channel("http://localhost/")
+
+    # Check that channel is used if provided.
+    transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_client_cert_source(
+    transport_class,
+):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+                credentials_file=None,
+
scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+            assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_adc(transport_class):
+    mock_ssl_cred = mock.Mock()
+    with mock.patch.multiple(
+        "google.auth.transport.grpc.SslCredentials",
+        __init__=mock.Mock(return_value=None),
+        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+    ):
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+            mock_cred = mock.Mock()
+
+            with pytest.warns(DeprecationWarning):
+                transport = transport_class(
+                    host="squid.clam.whelk",
+                    credentials=mock_cred,
+                    api_mtls_endpoint="mtls.squid.clam.whelk",
+                    client_cert_source=None,
+                )
+
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=mock_cred,
+                credentials_file=None,
+                scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_video_intelligence_service_grpc_lro_client():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_video_intelligence_service_grpc_lro_async_client():
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc_asyncio",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_common_billing_account_path():
+    billing_account = "squid"
+
+    expected = "billingAccounts/{billing_account}".format(
+        billing_account=billing_account,
+    )
+    actual = VideoIntelligenceServiceClient.common_billing_account_path(billing_account)
+    assert expected == actual
+
+
+def test_parse_common_billing_account_path():
+    expected = {
+        "billing_account": "clam",
+    }
+    path = VideoIntelligenceServiceClient.common_billing_account_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_billing_account_path(path)
+    assert expected == actual
+
+
+def test_common_folder_path():
+    folder = "whelk"
+
+    expected = "folders/{folder}".format(folder=folder,)
+    actual = VideoIntelligenceServiceClient.common_folder_path(folder)
+    assert expected == actual
+
+
+def test_parse_common_folder_path():
+    expected = {
+        "folder": "octopus",
+    }
+    path = VideoIntelligenceServiceClient.common_folder_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_folder_path(path)
+    assert expected == actual
+
+
+def test_common_organization_path():
+    organization = "oyster"
+
+    expected = "organizations/{organization}".format(organization=organization,)
+    actual = VideoIntelligenceServiceClient.common_organization_path(organization)
+    assert expected == actual
+
+
+def test_parse_common_organization_path():
+    expected = {
+        "organization": "nudibranch",
+    }
+    path = VideoIntelligenceServiceClient.common_organization_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_organization_path(path)
+    assert expected == actual
+
+
+def test_common_project_path():
+    project = "cuttlefish"
+
+    expected = "projects/{project}".format(project=project,)
+    actual = VideoIntelligenceServiceClient.common_project_path(project)
+    assert expected == actual
+
+
+def test_parse_common_project_path():
+    expected = {
+        "project": "mussel",
+    }
+    path = VideoIntelligenceServiceClient.common_project_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_project_path(path)
+    assert expected == actual
+
+
+def test_common_location_path():
+    project = "winkle"
+    location = "nautilus"
+
+    expected = "projects/{project}/locations/{location}".format(
+        project=project, location=location,
+    )
+    actual = VideoIntelligenceServiceClient.common_location_path(project, location)
+    assert expected == actual
+
+
+def test_parse_common_location_path():
+    expected = {
+        "project": "scallop",
+        "location": "abalone",
+    }
+    path = VideoIntelligenceServiceClient.common_location_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_location_path(path)
+    assert expected == actual
+
+
+def test_client_withDEFAULT_CLIENT_INFO():
+    client_info = gapic_v1.client_info.ClientInfo()
+
+    with mock.patch.object(
+        transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages"
+    ) as prep:
+        client = VideoIntelligenceServiceClient(
+            credentials=credentials.AnonymousCredentials(), client_info=client_info,
+        )
+        prep.assert_called_once_with(client_info)
+
+    with mock.patch.object(
+        transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages"
+    ) as prep:
+        transport_class = VideoIntelligenceServiceClient.get_transport_class()
+        transport = transport_class(
+            credentials=credentials.AnonymousCredentials(), client_info=client_info,
+        )
+        prep.assert_called_once_with(client_info)
diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/__init__.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/__init__.py
new file mode 100644
index 000000000000..8b137891791f
--- /dev/null
+++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/__init__.py
@@ -0,0 +1 @@
+
diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/test_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/test_video_intelligence_service.py
new file mode 100644
index 000000000000..be27ec5e6d3b
--- /dev/null
+++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p1beta1/test_video_intelligence_service.py
@@ -0,0 +1,1079 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2020 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import os
+import mock
+
+import grpc
+from grpc.experimental import aio
+import math
+import pytest
+from proto.marshal.rules.dates import DurationRule, TimestampRule
+
+from google import auth
+from google.api_core import client_options
+from google.api_core import exceptions
+from google.api_core import future
+from google.api_core import gapic_v1
+from google.api_core import grpc_helpers
+from google.api_core import grpc_helpers_async
+from google.api_core import operation_async  # type: ignore
+from google.api_core import operations_v1
+from google.auth import credentials
+from google.auth.exceptions import MutualTLSChannelError
+from google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service import (
+    VideoIntelligenceServiceAsyncClient,
+)
+from google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service import (
+    VideoIntelligenceServiceClient,
+)
+from google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service import (
+    transports,
+)
+from google.cloud.videointelligence_v1p1beta1.types import video_intelligence
+from google.longrunning import operations_pb2
+from google.oauth2 import service_account
+from google.protobuf import duration_pb2 as duration  # type: ignore
+
+
+def client_cert_source_callback():
+    return b"cert bytes", b"key bytes"
+
+
+# If default endpoint is localhost, then default mtls endpoint will be the same.
+# This method modifies the default endpoint so the client can produce a different
+# mtls endpoint for endpoint testing purposes.
+def modify_default_endpoint(client):
+    return (
+        "foo.googleapis.com"
+        if ("localhost" in client.DEFAULT_ENDPOINT)
+        else client.DEFAULT_ENDPOINT
+    )
+
+
+def test__get_default_mtls_endpoint():
+    api_endpoint = "example.googleapis.com"
+    api_mtls_endpoint = "example.mtls.googleapis.com"
+    sandbox_endpoint = "example.sandbox.googleapis.com"
+    sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com"
+    non_googleapi = "api.example.com"
+
+    assert VideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None
+    assert (
+        VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint)
+        == api_mtls_endpoint
+    )
+    assert (
+        VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint)
+        == api_mtls_endpoint
+    )
+    assert (
+        VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_endpoint)
+        == sandbox_mtls_endpoint
+    )
+    assert (
+        VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint)
+        == sandbox_mtls_endpoint
+    )
+    assert (
+        VideoIntelligenceServiceClient._get_default_mtls_endpoint(non_googleapi)
+        == non_googleapi
+    )
+
+
+@pytest.mark.parametrize(
+    "client_class",
+    [VideoIntelligenceServiceClient, VideoIntelligenceServiceAsyncClient],
+)
+def test_video_intelligence_service_client_from_service_account_file(client_class):
+    creds = credentials.AnonymousCredentials()
+    with mock.patch.object(
+        service_account.Credentials, "from_service_account_file"
+    ) as factory:
+        factory.return_value = creds
+        client = client_class.from_service_account_file("dummy/file/path.json")
+        assert client.transport._credentials == creds
+
+        client = client_class.from_service_account_json("dummy/file/path.json")
+        assert client.transport._credentials == creds
+
+        assert client.transport._host == "videointelligence.googleapis.com:443"
+
+
+def test_video_intelligence_service_client_get_transport_class():
+    transport = VideoIntelligenceServiceClient.get_transport_class()
+    assert transport == transports.VideoIntelligenceServiceGrpcTransport
+
+    transport = VideoIntelligenceServiceClient.get_transport_class("grpc")
+    assert transport == transports.VideoIntelligenceServiceGrpcTransport
+
+
+@pytest.mark.parametrize(
+    "client_class,transport_class,transport_name",
+    [
+        (
+            VideoIntelligenceServiceClient,
+            transports.VideoIntelligenceServiceGrpcTransport,
+            "grpc",
+        ),
+        (
+            VideoIntelligenceServiceAsyncClient,
+            transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+            "grpc_asyncio",
+        ),
+    ],
+)
+@mock.patch.object(
+    VideoIntelligenceServiceClient,
+    "DEFAULT_ENDPOINT",
+    modify_default_endpoint(VideoIntelligenceServiceClient),
+)
+@mock.patch.object(
+    VideoIntelligenceServiceAsyncClient,
+    "DEFAULT_ENDPOINT",
+    modify_default_endpoint(VideoIntelligenceServiceAsyncClient),
+)
+def test_video_intelligence_service_client_client_options(
+    client_class, transport_class, transport_name
+):
+    # Check that if channel is provided we won't create a new one.
+    with mock.patch.object(
+        VideoIntelligenceServiceClient, "get_transport_class"
+    ) as gtc:
+        transport = transport_class(credentials=credentials.AnonymousCredentials())
+        client = client_class(transport=transport)
+        gtc.assert_not_called()
+
+    # Check that if channel is provided via str we will create a new one.
+    with mock.patch.object(
+        VideoIntelligenceServiceClient, "get_transport_class"
+    ) as gtc:
+        client = client_class(transport=transport_name)
+        gtc.assert_called()
+
+    # Check the case api_endpoint is provided.
+    options = client_options.ClientOptions(api_endpoint="squid.clam.whelk")
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host="squid.clam.whelk",
+            scopes=None,
+            ssl_channel_credentials=None,
+            quota_project_id=None,
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+        )
+
+    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
+    # "never".
+    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}):
+        with mock.patch.object(transport_class, "__init__") as patched:
+            patched.return_value = None
+            client = client_class()
+            patched.assert_called_once_with(
+                credentials=None,
+                credentials_file=None,
+                host=client.DEFAULT_ENDPOINT,
+                scopes=None,
+                ssl_channel_credentials=None,
+                quota_project_id=None,
+                client_info=transports.base.DEFAULT_CLIENT_INFO,
+            )
+
+    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is
+    # "always".
+    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}):
+        with mock.patch.object(transport_class, "__init__") as patched:
+            patched.return_value = None
+            client = client_class()
+            patched.assert_called_once_with(
+                credentials=None,
+                credentials_file=None,
+                host=client.DEFAULT_MTLS_ENDPOINT,
+                scopes=None,
+                ssl_channel_credentials=None,
+                quota_project_id=None,
+                client_info=transports.base.DEFAULT_CLIENT_INFO,
+            )
+
+    # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has
+    # unsupported value.
+    with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}):
+        with pytest.raises(MutualTLSChannelError):
+            client = client_class()
+
+    # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value.
+    with mock.patch.dict(
+        os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"}
+    ):
+        with pytest.raises(ValueError):
+            client = client_class()
+
+    # Check the case quota_project_id is provided
+    options = client_options.ClientOptions(quota_project_id="octopus")
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host=client.DEFAULT_ENDPOINT,
+            scopes=None,
+            ssl_channel_credentials=None,
+            quota_project_id="octopus",
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+        )
+
+
+@pytest.mark.parametrize(
+    "client_class,transport_class,transport_name,use_client_cert_env",
+    [
+        (
+            VideoIntelligenceServiceClient,
+            transports.VideoIntelligenceServiceGrpcTransport,
+            "grpc",
+            "true",
+        ),
+        (
+            VideoIntelligenceServiceAsyncClient,
+            transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+            "grpc_asyncio",
+            "true",
+        ),
+        (
+            VideoIntelligenceServiceClient,
+            transports.VideoIntelligenceServiceGrpcTransport,
+            "grpc",
+            "false",
+        ),
+        (
+            VideoIntelligenceServiceAsyncClient,
+            transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+            "grpc_asyncio",
+            "false",
+        ),
+    ],
+)
+@mock.patch.object(
+    VideoIntelligenceServiceClient,
+    "DEFAULT_ENDPOINT",
+    modify_default_endpoint(VideoIntelligenceServiceClient),
+)
+@mock.patch.object(
+    VideoIntelligenceServiceAsyncClient,
+    "DEFAULT_ENDPOINT",
+    modify_default_endpoint(VideoIntelligenceServiceAsyncClient),
+)
+@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"})
+def test_video_intelligence_service_client_mtls_env_auto(
+    client_class, transport_class, transport_name, use_client_cert_env
+):
+    # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default
+    # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists.
+
+    # Check the case client_cert_source is provided. Whether client cert is used depends on
+    # GOOGLE_API_USE_CLIENT_CERTIFICATE value.
+    with mock.patch.dict(
+        os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
+    ):
+        options = client_options.ClientOptions(
+            client_cert_source=client_cert_source_callback
+        )
+        with mock.patch.object(transport_class, "__init__") as patched:
+            ssl_channel_creds = mock.Mock()
+            with mock.patch(
+                "grpc.ssl_channel_credentials", return_value=ssl_channel_creds
+            ):
+                patched.return_value = None
+                client = client_class(client_options=options)
+
+                if use_client_cert_env == "false":
+                    expected_ssl_channel_creds = None
+                    expected_host = client.DEFAULT_ENDPOINT
+                else:
+                    expected_ssl_channel_creds = ssl_channel_creds
+                    expected_host = client.DEFAULT_MTLS_ENDPOINT
+
+                patched.assert_called_once_with(
+                    credentials=None,
+                    credentials_file=None,
+                    host=expected_host,
+                    scopes=None,
+                    ssl_channel_credentials=expected_ssl_channel_creds,
+                    quota_project_id=None,
+                    client_info=transports.base.DEFAULT_CLIENT_INFO,
+                )
+
+    # Check the case ADC client cert is provided. Whether client cert is used depends on
+    # GOOGLE_API_USE_CLIENT_CERTIFICATE value.
+    with mock.patch.dict(
+        os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
+    ):
+        with mock.patch.object(transport_class, "__init__") as patched:
+            with mock.patch(
+                "google.auth.transport.grpc.SslCredentials.__init__", return_value=None
+            ):
+                with mock.patch(
+                    "google.auth.transport.grpc.SslCredentials.is_mtls",
+                    new_callable=mock.PropertyMock,
+                ) as is_mtls_mock:
+                    with mock.patch(
+                        "google.auth.transport.grpc.SslCredentials.ssl_credentials",
+                        new_callable=mock.PropertyMock,
+                    ) as ssl_credentials_mock:
+                        if use_client_cert_env == "false":
+                            is_mtls_mock.return_value = False
+                            ssl_credentials_mock.return_value = None
+                            expected_host = client.DEFAULT_ENDPOINT
+                            expected_ssl_channel_creds = None
+                        else:
+                            is_mtls_mock.return_value = True
+                            ssl_credentials_mock.return_value = mock.Mock()
+                            expected_host = client.DEFAULT_MTLS_ENDPOINT
+                            expected_ssl_channel_creds = (
+                                ssl_credentials_mock.return_value
+                            )
+
+                        patched.return_value = None
+                        client = client_class()
+                        patched.assert_called_once_with(
+                            credentials=None,
+                            credentials_file=None,
+                            host=expected_host,
+                            scopes=None,
+                            ssl_channel_credentials=expected_ssl_channel_creds,
+                            quota_project_id=None,
+                            client_info=transports.base.DEFAULT_CLIENT_INFO,
+                        )
+
+    # Check the case client_cert_source and ADC client cert are not provided.
+    with mock.patch.dict(
+        os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env}
+    ):
+        with mock.patch.object(transport_class, "__init__") as patched:
+            with mock.patch(
+                "google.auth.transport.grpc.SslCredentials.__init__", return_value=None
+            ):
+                with mock.patch(
+                    "google.auth.transport.grpc.SslCredentials.is_mtls",
+                    new_callable=mock.PropertyMock,
+                ) as is_mtls_mock:
+                    is_mtls_mock.return_value = False
+                    patched.return_value = None
+                    client = client_class()
+                    patched.assert_called_once_with(
+                        credentials=None,
+                        credentials_file=None,
+                        host=client.DEFAULT_ENDPOINT,
+                        scopes=None,
+                        ssl_channel_credentials=None,
+                        quota_project_id=None,
+                        client_info=transports.base.DEFAULT_CLIENT_INFO,
+                    )
+
+
+@pytest.mark.parametrize(
+    "client_class,transport_class,transport_name",
+    [
+        (
+            VideoIntelligenceServiceClient,
+            transports.VideoIntelligenceServiceGrpcTransport,
+            "grpc",
+        ),
+        (
+            VideoIntelligenceServiceAsyncClient,
+            transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+            "grpc_asyncio",
+        ),
+    ],
+)
+def test_video_intelligence_service_client_client_options_scopes(
+    client_class, transport_class, transport_name
+):
+    # Check the case scopes are provided.
+    options = client_options.ClientOptions(scopes=["1", "2"],)
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host=client.DEFAULT_ENDPOINT,
+            scopes=["1", "2"],
+            ssl_channel_credentials=None,
+            quota_project_id=None,
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+        )
+
+
+@pytest.mark.parametrize(
+    "client_class,transport_class,transport_name",
+    [
+        (
+            VideoIntelligenceServiceClient,
+            transports.VideoIntelligenceServiceGrpcTransport,
+            "grpc",
+        ),
+        (
+            VideoIntelligenceServiceAsyncClient,
+            transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+            "grpc_asyncio",
+        ),
+    ],
+)
+def test_video_intelligence_service_client_client_options_credentials_file(
+    client_class, transport_class, transport_name
+):
+    # Check the case credentials file is provided.
+    options = client_options.ClientOptions(credentials_file="credentials.json")
+    with mock.patch.object(transport_class, "__init__") as patched:
+        patched.return_value = None
+        client = client_class(client_options=options)
+        patched.assert_called_once_with(
+            credentials=None,
+            credentials_file="credentials.json",
+            host=client.DEFAULT_ENDPOINT,
+            scopes=None,
+            ssl_channel_credentials=None,
+            quota_project_id=None,
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+        )
+
+
+def test_video_intelligence_service_client_client_options_from_dict():
+    with mock.patch(
+        "google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceGrpcTransport.__init__"
+    ) as grpc_transport:
+        grpc_transport.return_value = None
+        client = VideoIntelligenceServiceClient(
+            client_options={"api_endpoint": "squid.clam.whelk"}
+        )
+        grpc_transport.assert_called_once_with(
+            credentials=None,
+            credentials_file=None,
+            host="squid.clam.whelk",
+            scopes=None,
+            ssl_channel_credentials=None,
+            quota_project_id=None,
+            client_info=transports.base.DEFAULT_CLIENT_INFO,
+        )
+
+
+def test_annotate_video(
+    transport: str = "grpc", request_type=video_intelligence.AnnotateVideoRequest
+):
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(), transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.annotate_video), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/spam")
+
+        response = client.annotate_video(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+
+        assert args[0] == video_intelligence.AnnotateVideoRequest()
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, future.Future)
+
+
+def test_annotate_video_from_dict():
+    test_annotate_video(request_type=dict)
+
+
+@pytest.mark.asyncio
+async def test_annotate_video_async(
+    transport: str = "grpc_asyncio",
+    request_type=video_intelligence.AnnotateVideoRequest,
+):
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(), transport=transport,
+    )
+
+    # Everything is optional in proto3 as far as the runtime is concerned,
+    # and we are mocking out the actual API, so just send an empty request.
+    request = request_type()
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.annotate_video), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+
+        response = await client.annotate_video(request)
+
+        # Establish that the underlying gRPC stub method was called.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+
+        assert args[0] == video_intelligence.AnnotateVideoRequest()
+
+    # Establish that the response is the type that we expect.
+    assert isinstance(response, future.Future)
+
+
+@pytest.mark.asyncio
+async def test_annotate_video_async_from_dict():
+    await test_annotate_video_async(request_type=dict)
+
+
+def test_annotate_video_flattened():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.annotate_video), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/op")
+
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        client.annotate_video(
+            input_uri="input_uri_value",
+            features=[video_intelligence.Feature.LABEL_DETECTION],
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls) == 1
+        _, args, _ = call.mock_calls[0]
+
+        assert args[0].input_uri == "input_uri_value"
+
+        assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION]
+
+
+def test_annotate_video_flattened_error():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        client.annotate_video(
+            video_intelligence.AnnotateVideoRequest(),
+            input_uri="input_uri_value",
+            features=[video_intelligence.Feature.LABEL_DETECTION],
+        )
+
+
+@pytest.mark.asyncio
+async def test_annotate_video_flattened_async():
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(),
+    )
+
+    # Mock the actual call within the gRPC stub, and fake the request.
+    with mock.patch.object(type(client.transport.annotate_video), "__call__") as call:
+        # Designate an appropriate return value for the call.
+        call.return_value = operations_pb2.Operation(name="operations/op")
+
+        call.return_value = grpc_helpers_async.FakeUnaryUnaryCall(
+            operations_pb2.Operation(name="operations/spam")
+        )
+        # Call the method with a truthy value for each flattened field,
+        # using the keyword arguments to the method.
+        response = await client.annotate_video(
+            input_uri="input_uri_value",
+            features=[video_intelligence.Feature.LABEL_DETECTION],
+        )
+
+        # Establish that the underlying call was made with the expected
+        # request object values.
+        assert len(call.mock_calls)
+        _, args, _ = call.mock_calls[0]
+
+        assert args[0].input_uri == "input_uri_value"
+
+        assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION]
+
+
+@pytest.mark.asyncio
+async def test_annotate_video_flattened_error_async():
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(),
+    )
+
+    # Attempting to call a method with both a request object and flattened
+    # fields is an error.
+    with pytest.raises(ValueError):
+        await client.annotate_video(
+            video_intelligence.AnnotateVideoRequest(),
+            input_uri="input_uri_value",
+            features=[video_intelligence.Feature.LABEL_DETECTION],
+        )
+
+
+def test_credentials_transport_error():
+    # It is an error to provide credentials and a transport instance.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    with pytest.raises(ValueError):
+        client = VideoIntelligenceServiceClient(
+            credentials=credentials.AnonymousCredentials(), transport=transport,
+        )
+
+    # It is an error to provide a credentials file and a transport instance.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    with pytest.raises(ValueError):
+        client = VideoIntelligenceServiceClient(
+            client_options={"credentials_file": "credentials.json"},
+            transport=transport,
+        )
+
+    # It is an error to provide scopes and a transport instance.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    with pytest.raises(ValueError):
+        client = VideoIntelligenceServiceClient(
+            client_options={"scopes": ["1", "2"]}, transport=transport,
+        )
+
+
+def test_transport_instance():
+    # A client may be instantiated with a custom transport instance.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    client = VideoIntelligenceServiceClient(transport=transport)
+    assert client.transport is transport
+
+
+def test_transport_get_channel():
+    # A client may be instantiated with a custom transport instance.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    channel = transport.grpc_channel
+    assert channel
+
+    transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    channel = transport.grpc_channel
+    assert channel
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_transport_adc(transport_class):
+    # Test default credentials are used if not provided.
+    with mock.patch.object(auth, "default") as adc:
+        adc.return_value = (credentials.AnonymousCredentials(), None)
+        transport_class()
+        adc.assert_called_once()
+
+
+def test_transport_grpc_default():
+    # A client should use the gRPC transport by default.
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(),
+    )
+    assert isinstance(
+        client.transport, transports.VideoIntelligenceServiceGrpcTransport,
+    )
+
+
+def test_video_intelligence_service_base_transport_error():
+    # Passing both a credentials object and credentials_file should raise an error
+    with pytest.raises(exceptions.DuplicateCredentialArgs):
+        transport = transports.VideoIntelligenceServiceTransport(
+            credentials=credentials.AnonymousCredentials(),
+            credentials_file="credentials.json",
+        )
+
+
+def test_video_intelligence_service_base_transport():
+    # Instantiate the base transport.
+    with mock.patch(
+        "google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport.__init__"
+    ) as Transport:
+        Transport.return_value = None
+        transport = transports.VideoIntelligenceServiceTransport(
+            credentials=credentials.AnonymousCredentials(),
+        )
+
+    # Every method on the transport should just blindly
+    # raise NotImplementedError.
+    methods = ("annotate_video",)
+    for method in methods:
+        with pytest.raises(NotImplementedError):
+            getattr(transport, method)(request=object())
+
+    # Additionally, the LRO client (a property) should
+    # also raise NotImplementedError
+    with pytest.raises(NotImplementedError):
+        transport.operations_client
+
+
+def test_video_intelligence_service_base_transport_with_credentials_file():
+    # Instantiate the base transport with a credentials file
+    with mock.patch.object(
+        auth, "load_credentials_from_file"
+    ) as load_creds, mock.patch(
+        "google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages"
+    ) as Transport:
+        Transport.return_value = None
+        load_creds.return_value = (credentials.AnonymousCredentials(), None)
+        transport = transports.VideoIntelligenceServiceTransport(
+            credentials_file="credentials.json", quota_project_id="octopus",
+        )
+        load_creds.assert_called_once_with(
+            "credentials.json",
+            scopes=("https://www.googleapis.com/auth/cloud-platform",),
+            quota_project_id="octopus",
+        )
+
+
+def test_video_intelligence_service_base_transport_with_adc():
+    # Test the default credentials are used if credentials and credentials_file are None.
+    with mock.patch.object(auth, "default") as adc, mock.patch(
+        "google.cloud.videointelligence_v1p1beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages"
+    ) as Transport:
+        Transport.return_value = None
+        adc.return_value = (credentials.AnonymousCredentials(), None)
+        transport = transports.VideoIntelligenceServiceTransport()
+        adc.assert_called_once()
+
+
+def test_video_intelligence_service_auth_adc():
+    # If no credentials are provided, we should use ADC credentials.
+    with mock.patch.object(auth, "default") as adc:
+        adc.return_value = (credentials.AnonymousCredentials(), None)
+        VideoIntelligenceServiceClient()
+        adc.assert_called_once_with(
+            scopes=("https://www.googleapis.com/auth/cloud-platform",),
+            quota_project_id=None,
+        )
+
+
+def test_video_intelligence_service_transport_auth_adc():
+    # If credentials and host are not provided, the transport class should use
+    # ADC credentials.
+    with mock.patch.object(auth, "default") as adc:
+        adc.return_value = (credentials.AnonymousCredentials(), None)
+        transports.VideoIntelligenceServiceGrpcTransport(
+            host="squid.clam.whelk", quota_project_id="octopus"
+        )
+        adc.assert_called_once_with(
+            scopes=("https://www.googleapis.com/auth/cloud-platform",),
+            quota_project_id="octopus",
+        )
+
+
+def test_video_intelligence_service_host_no_port():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(),
+        client_options=client_options.ClientOptions(
+            api_endpoint="videointelligence.googleapis.com"
+        ),
+    )
+    assert client.transport._host == "videointelligence.googleapis.com:443"
+
+
+def test_video_intelligence_service_host_with_port():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(),
+        client_options=client_options.ClientOptions(
+            api_endpoint="videointelligence.googleapis.com:8000"
+        ),
+    )
+    assert client.transport._host == "videointelligence.googleapis.com:8000"
+
+
+def test_video_intelligence_service_grpc_transport_channel():
+    channel = grpc.insecure_channel("http://localhost/")
+
+    # Check that channel is used if provided.
+    transport = transports.VideoIntelligenceServiceGrpcTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_video_intelligence_service_grpc_asyncio_transport_channel():
+    channel = aio.insecure_channel("http://localhost/")
+
+    # Check that channel is used if provided.
+    transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_client_cert_source(
+    transport_class,
+):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+                credentials_file=None,
+                scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+            assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.VideoIntelligenceServiceGrpcTransport,
+        transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_adc(transport_class):
+    mock_ssl_cred = mock.Mock()
+    with mock.patch.multiple(
+        "google.auth.transport.grpc.SslCredentials",
+        __init__=mock.Mock(return_value=None),
+        ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+    ):
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+            mock_cred = mock.Mock()
+
+            with pytest.warns(DeprecationWarning):
+                transport = transport_class(
+                    host="squid.clam.whelk",
+                    credentials=mock_cred,
+                    api_mtls_endpoint="mtls.squid.clam.whelk",
+                    client_cert_source=None,
+                )
+
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=mock_cred,
+                credentials_file=None,
+                scopes=("https://www.googleapis.com/auth/cloud-platform",),
+                ssl_credentials=mock_ssl_cred,
+                quota_project_id=None,
+            )
+            assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_video_intelligence_service_grpc_lro_client():
+    client = VideoIntelligenceServiceClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_video_intelligence_service_grpc_lro_async_client():
+    client = VideoIntelligenceServiceAsyncClient(
+        credentials=credentials.AnonymousCredentials(), transport="grpc_asyncio",
+    )
+    transport = client.transport
+
+    # Ensure that we have an api-core operations client.
+    assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
+
+    # Ensure that subsequent calls to the property send the exact same object.
+    assert transport.operations_client is transport.operations_client
+
+
+def test_common_billing_account_path():
+    billing_account = "squid"
+
+    expected = "billingAccounts/{billing_account}".format(
+        billing_account=billing_account,
+    )
+    actual = VideoIntelligenceServiceClient.common_billing_account_path(billing_account)
+    assert expected == actual
+
+
+def test_parse_common_billing_account_path():
+    expected = {
+        "billing_account": "clam",
+    }
+    path = VideoIntelligenceServiceClient.common_billing_account_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_billing_account_path(path)
+    assert expected == actual
+
+
+def test_common_folder_path():
+    folder = "whelk"
+
+    expected = "folders/{folder}".format(folder=folder,)
+    actual = VideoIntelligenceServiceClient.common_folder_path(folder)
+    assert expected == actual
+
+
+def test_parse_common_folder_path():
+    expected = {
+        "folder": "octopus",
+    }
+    path = VideoIntelligenceServiceClient.common_folder_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_folder_path(path)
+    assert expected == actual
+
+
+def test_common_organization_path():
+    organization = "oyster"
+
+    expected = "organizations/{organization}".format(organization=organization,)
+    actual = VideoIntelligenceServiceClient.common_organization_path(organization)
+    assert expected == actual
+
+
+def test_parse_common_organization_path():
+    expected = {
+        "organization": "nudibranch",
+    }
+    path = VideoIntelligenceServiceClient.common_organization_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_organization_path(path)
+    assert expected == actual
+
+
+def test_common_project_path():
+    project = "cuttlefish"
+
+    expected = "projects/{project}".format(project=project,)
+    actual = VideoIntelligenceServiceClient.common_project_path(project)
+    assert expected == actual
+
+
+def test_parse_common_project_path():
+    expected = {
+        "project": "mussel",
+    }
+    path = VideoIntelligenceServiceClient.common_project_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_project_path(path)
+    assert expected == actual
+
+
+def test_common_location_path():
+    project = "winkle"
+    location = "nautilus"
+
+    expected = "projects/{project}/locations/{location}".format(
+        project=project, location=location,
+    )
+    actual = VideoIntelligenceServiceClient.common_location_path(project, location)
+    assert expected == actual
+
+
+def test_parse_common_location_path():
+    expected = {
+        "project": "scallop",
+        "location": "abalone",
+    }
+    path = VideoIntelligenceServiceClient.common_location_path(**expected)
+
+    # Check that the path construction is reversible.
+    actual = VideoIntelligenceServiceClient.parse_common_location_path(path)
+    assert expected == actual
+
+
+def test_client_withDEFAULT_CLIENT_INFO():
+    client_info = gapic_v1.client_info.ClientInfo()
+
+    with mock.patch.object(
+        transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages"
+    ) as prep:
+        client = VideoIntelligenceServiceClient(
+            credentials=credentials.AnonymousCredentials(), client_info=client_info,
+        )
+        prep.assert_called_once_with(client_info)
+
+    with mock.patch.object(
+        transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages"
+    ) as prep:
+        transport_class = VideoIntelligenceServiceClient.get_transport_class()
+        transport = transport_class(
+            credentials=credentials.AnonymousCredentials(), client_info=client_info,
+        )
+        prep.assert_called_once_with(client_info)
diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/__init__.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/__init__.py
new file mode 100644
index 000000000000..8b137891791f
--- /dev/null
+++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/__init__.py
@@ -0,0 +1 @@
+
diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/test_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/test_video_intelligence_service.py
new file mode 100644
index 000000000000..0fa136c11650
--- /dev/null
+++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p2beta1/test_video_intelligence_service.py
@@ -0,0 +1,1079 @@
+# -*- coding: utf-8 -*-
+
+# Copyright 2020 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import os +import mock + +import grpc +from grpc.experimental import aio +import math +import pytest +from proto.marshal.rules.dates import DurationRule, TimestampRule + +from google import auth +from google.api_core import client_options +from google.api_core import exceptions +from google.api_core import future +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.api_core import operation_async # type: ignore +from google.api_core import operations_v1 +from google.auth import credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service import ( + VideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service import ( + VideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service import ( + transports, +) +from google.cloud.videointelligence_v1p2beta1.types import video_intelligence +from google.longrunning import operations_pb2 +from google.oauth2 import service_account +from google.protobuf import duration_pb2 as duration # type: ignore + + +def client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# If default endpoint is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. 
+def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert VideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(non_googleapi) + == non_googleapi + ) + + +@pytest.mark.parametrize( + "client_class", + [VideoIntelligenceServiceClient, VideoIntelligenceServiceAsyncClient], +) +def test_video_intelligence_service_client_from_service_account_file(client_class): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file("dummy/file/path.json") + assert client.transport._credentials == creds + + client = client_class.from_service_account_json("dummy/file/path.json") + assert client.transport._credentials == creds + + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_client_get_transport_class(): + transport = VideoIntelligenceServiceClient.get_transport_class() + assert transport == 
transports.VideoIntelligenceServiceGrpcTransport + + transport = VideoIntelligenceServiceClient.get_transport_class("grpc") + assert transport == transports.VideoIntelligenceServiceGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +def test_video_intelligence_service_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + transport = transport_class(credentials=credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. 
+ options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError): + client = client_class() + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError): + client = client_class() + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "true", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "false", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_video_intelligence_service_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. 
+ + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + ssl_channel_creds = mock.Mock() + with mock.patch( + "grpc.ssl_channel_credentials", return_value=ssl_channel_creds + ): + patched.return_value = None + client = client_class(client_options=options) + + if use_client_cert_env == "false": + expected_ssl_channel_creds = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_ssl_channel_creds = ssl_channel_creds + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.ssl_credentials", + new_callable=mock.PropertyMock, + ) as ssl_credentials_mock: + if use_client_cert_env == "false": + is_mtls_mock.return_value = False + ssl_credentials_mock.return_value = None + expected_host = client.DEFAULT_ENDPOINT + expected_ssl_channel_creds = None + else: + is_mtls_mock.return_value = True + ssl_credentials_mock.return_value = mock.Mock() + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_ssl_channel_creds = ( + ssl_credentials_mock.return_value + ) + + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + is_mtls_mock.return_value = False + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
+ options = client_options.ClientOptions(scopes=["1", "2"],) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=["1", "2"], + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_credentials_file( + client_class, transport_class, transport_name +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_video_intelligence_service_client_client_options_from_dict(): + with mock.patch( + "google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = VideoIntelligenceServiceClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + 
quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_annotate_video( + transport: str = "grpc", request_type=video_intelligence.AnnotateVideoRequest +): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + + response = client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_annotate_video_from_dict(): + test_annotate_video(request_type=dict) + + +@pytest.mark.asyncio +async def test_annotate_video_async( + transport: str = "grpc_asyncio", + request_type=video_intelligence.AnnotateVideoRequest, +): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + + response = await client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_annotate_video_async_from_dict(): + await test_annotate_video_async(request_type=dict) + + +def test_annotate_video_flattened(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +def test_annotate_video_flattened_error(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + response = await client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_error_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"scopes": ["1", "2"]}, transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + client = VideoIntelligenceServiceClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, transports.VideoIntelligenceServiceGrpcTransport, + ) + + +def test_video_intelligence_service_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(exceptions.DuplicateCredentialArgs): + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_video_intelligence_service_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ("annotate_video",) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + +def test_video_intelligence_service_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + auth, "load_credentials_from_file" + ) as load_creds, mock.patch( + "google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport( + credentials_file="credentials.json", quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(auth, "default") as adc, mock.patch( + "google.cloud.videointelligence_v1p2beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport() + adc.assert_called_once() + + +def test_video_intelligence_service_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + VideoIntelligenceServiceClient() + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id=None, + ) + + +def test_video_intelligence_service_transport_auth_adc(): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transports.VideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", quota_project_id="octopus" + ) + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_host_no_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_host_with_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com:8000" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:8000" + + +def test_video_intelligence_service_grpc_transport_channel(): + channel = grpc.insecure_channel("http://localhost/") + + # Check that channel is used if provided. 
transport = transports.VideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", channel=channel, + ) + assert transport.grpc_channel == channel + assert transport._host == "squid.clam.whelk:443" + assert transport._ssl_channel_credentials is None + + +def test_video_intelligence_service_grpc_asyncio_transport_channel(): + channel = aio.insecure_channel("http://localhost/") + + # Check that channel is used if provided. + transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport( + host="squid.clam.whelk", channel=channel, + ) + assert transport.grpc_channel == channel + assert transport._host == "squid.clam.whelk:443" + assert transport._ssl_channel_credentials is None + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_video_intelligence_service_transport_channel_mtls_with_client_cert_source( + transport_class, +): + with mock.patch( + "grpc.ssl_channel_credentials", autospec=True + ) as grpc_ssl_channel_cred: + with mock.patch.object( + transport_class, "create_channel", autospec=True + ) as grpc_create_channel: + mock_ssl_cred = mock.Mock() + grpc_ssl_channel_cred.return_value = mock_ssl_cred + + mock_grpc_channel = mock.Mock() + grpc_create_channel.return_value = mock_grpc_channel + + cred = credentials.AnonymousCredentials() + with pytest.warns(DeprecationWarning): + with mock.patch.object(auth, "default") as adc: + adc.return_value = (cred, None) + transport = transport_class( + host="squid.clam.whelk", + api_mtls_endpoint="mtls.squid.clam.whelk", + client_cert_source=client_cert_source_callback, + ) + adc.assert_called_once() + + grpc_ssl_channel_cred.assert_called_once_with( + certificate_chain=b"cert bytes", private_key=b"key bytes" + ) + grpc_create_channel.assert_called_once_with( + "mtls.squid.clam.whelk:443", + credentials=cred, + credentials_file=None, +
scopes=("https://www.googleapis.com/auth/cloud-platform",), + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + ) + assert transport.grpc_channel == mock_grpc_channel + assert transport._ssl_channel_credentials == mock_ssl_cred + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_video_intelligence_service_transport_channel_mtls_with_adc(transport_class): + mock_ssl_cred = mock.Mock() + with mock.patch.multiple( + "google.auth.transport.grpc.SslCredentials", + __init__=mock.Mock(return_value=None), + ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), + ): + with mock.patch.object( + transport_class, "create_channel", autospec=True + ) as grpc_create_channel: + mock_grpc_channel = mock.Mock() + grpc_create_channel.return_value = mock_grpc_channel + mock_cred = mock.Mock() + + with pytest.warns(DeprecationWarning): + transport = transport_class( + host="squid.clam.whelk", + credentials=mock_cred, + api_mtls_endpoint="mtls.squid.clam.whelk", + client_cert_source=None, + ) + + grpc_create_channel.assert_called_once_with( + "mtls.squid.clam.whelk:443", + credentials=mock_cred, + credentials_file=None, + scopes=("https://www.googleapis.com/auth/cloud-platform",), + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + ) + assert transport.grpc_channel == mock_grpc_channel + + +def test_video_intelligence_service_grpc_lro_client(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport="grpc", + ) + transport = client.transport + + # Ensure that we have an api-core operations client. + assert isinstance(transport.operations_client, operations_v1.OperationsClient,) + + # Ensure that subsequent calls to the property send the exact same object. 
+ assert transport.operations_client is transport.operations_client + + +def test_video_intelligence_service_grpc_lro_async_client(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport="grpc_asyncio", + ) + transport = client.transport + + # Ensure that we have an api-core operations client. + assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,) + + # Ensure that subsequent calls to the property send the exact same object. + assert transport.operations_client is transport.operations_client + + +def test_common_billing_account_path(): + billing_account = "squid" + + expected = "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + actual = VideoIntelligenceServiceClient.common_billing_account_path(billing_account) + assert expected == actual + + +def test_parse_common_billing_account_path(): + expected = { + "billing_account": "clam", + } + path = VideoIntelligenceServiceClient.common_billing_account_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_billing_account_path(path) + assert expected == actual + + +def test_common_folder_path(): + folder = "whelk" + + expected = "folders/{folder}".format(folder=folder,) + actual = VideoIntelligenceServiceClient.common_folder_path(folder) + assert expected == actual + + +def test_parse_common_folder_path(): + expected = { + "folder": "octopus", + } + path = VideoIntelligenceServiceClient.common_folder_path(**expected) + + # Check that the path construction is reversible. 
+ actual = VideoIntelligenceServiceClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "oyster" + + expected = "organizations/{organization}".format(organization=organization,) + actual = VideoIntelligenceServiceClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "nudibranch", + } + path = VideoIntelligenceServiceClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "cuttlefish" + + expected = "projects/{project}".format(project=project,) + actual = VideoIntelligenceServiceClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "mussel", + } + path = VideoIntelligenceServiceClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "winkle" + location = "nautilus" + + expected = "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + actual = VideoIntelligenceServiceClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "scallop", + "location": "abalone", + } + path = VideoIntelligenceServiceClient.common_location_path(**expected) + + # Check that the path construction is reversible. 
+ actual = VideoIntelligenceServiceClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_withDEFAULT_CLIENT_INFO(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = VideoIntelligenceServiceClient.get_transport_class() + transport = transport_class( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/__init__.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/__init__.py new file mode 100644 index 000000000000..8b137891791f --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/__init__.py @@ -0,0 +1 @@ + diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_streaming_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_streaming_video_intelligence_service.py new file mode 100644 index 000000000000..5e67bd20a21b --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_streaming_video_intelligence_service.py @@ -0,0 +1,996 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import os +import mock + +import grpc +from grpc.experimental import aio +import math +import pytest +from proto.marshal.rules.dates import DurationRule, TimestampRule + +from google import auth +from google.api_core import client_options +from google.api_core import exceptions +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.auth import credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service import ( + StreamingVideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service import ( + StreamingVideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service import ( + transports, +) +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.oauth2 import service_account +from google.rpc import status_pb2 as status # type: ignore + + +def client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# If default endpoint is localhost, then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. 
+def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None + ) + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint( + api_mtls_endpoint + ) + == api_mtls_endpoint + ) + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint( + sandbox_endpoint + ) + == sandbox_mtls_endpoint + ) + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint( + sandbox_mtls_endpoint + ) + == sandbox_mtls_endpoint + ) + assert ( + StreamingVideoIntelligenceServiceClient._get_default_mtls_endpoint( + non_googleapi + ) + == non_googleapi + ) + + +@pytest.mark.parametrize( + "client_class", + [ + StreamingVideoIntelligenceServiceClient, + StreamingVideoIntelligenceServiceAsyncClient, + ], +) +def test_streaming_video_intelligence_service_client_from_service_account_file( + client_class, +): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file("dummy/file/path.json") + assert client.transport._credentials == creds + + client = client_class.from_service_account_json("dummy/file/path.json") + assert client.transport._credentials == creds + + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def 
test_streaming_video_intelligence_service_client_get_transport_class(): + transport = StreamingVideoIntelligenceServiceClient.get_transport_class() + assert transport == transports.StreamingVideoIntelligenceServiceGrpcTransport + + transport = StreamingVideoIntelligenceServiceClient.get_transport_class("grpc") + assert transport == transports.StreamingVideoIntelligenceServiceGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + StreamingVideoIntelligenceServiceClient, + transports.StreamingVideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + StreamingVideoIntelligenceServiceAsyncClient, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +@mock.patch.object( + StreamingVideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(StreamingVideoIntelligenceServiceClient), +) +@mock.patch.object( + StreamingVideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(StreamingVideoIntelligenceServiceAsyncClient), +) +def test_streaming_video_intelligence_service_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object( + StreamingVideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + transport = transport_class(credentials=credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object( + StreamingVideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. 
+ options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError): + client = client_class() + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError): + client = client_class() + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + ( + StreamingVideoIntelligenceServiceClient, + transports.StreamingVideoIntelligenceServiceGrpcTransport, + "grpc", + "true", + ), + ( + StreamingVideoIntelligenceServiceAsyncClient, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + ( + StreamingVideoIntelligenceServiceClient, + transports.StreamingVideoIntelligenceServiceGrpcTransport, + "grpc", + "false", + ), + ( + StreamingVideoIntelligenceServiceAsyncClient, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + ], +) +@mock.patch.object( + StreamingVideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(StreamingVideoIntelligenceServiceClient), +) +@mock.patch.object( + StreamingVideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(StreamingVideoIntelligenceServiceAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_streaming_video_intelligence_service_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. 
Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. + + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + ssl_channel_creds = mock.Mock() + with mock.patch( + "grpc.ssl_channel_credentials", return_value=ssl_channel_creds + ): + patched.return_value = None + client = client_class(client_options=options) + + if use_client_cert_env == "false": + expected_ssl_channel_creds = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_ssl_channel_creds = ssl_channel_creds + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.ssl_credentials", + new_callable=mock.PropertyMock, + ) as ssl_credentials_mock: + if use_client_cert_env == "false": + is_mtls_mock.return_value = False + ssl_credentials_mock.return_value = None + expected_host = client.DEFAULT_ENDPOINT + expected_ssl_channel_creds = None + else: + is_mtls_mock.return_value = True + ssl_credentials_mock.return_value = mock.Mock() + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_ssl_channel_creds = ( + ssl_credentials_mock.return_value + ) + + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + is_mtls_mock.return_value = False + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + StreamingVideoIntelligenceServiceClient, + transports.StreamingVideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + StreamingVideoIntelligenceServiceAsyncClient, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_streaming_video_intelligence_service_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
+ options = client_options.ClientOptions(scopes=["1", "2"],) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=["1", "2"], + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + StreamingVideoIntelligenceServiceClient, + transports.StreamingVideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + StreamingVideoIntelligenceServiceAsyncClient, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_streaming_video_intelligence_service_client_client_options_credentials_file( + client_class, transport_class, transport_name +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_streaming_video_intelligence_service_client_client_options_from_dict(): + with mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service.transports.StreamingVideoIntelligenceServiceGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = StreamingVideoIntelligenceServiceClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + 
host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_streaming_annotate_video( + transport: str = "grpc", + request_type=video_intelligence.StreamingAnnotateVideoRequest, +): + client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + requests = [request] + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.streaming_annotate_video), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = iter([video_intelligence.StreamingAnnotateVideoResponse()]) + + response = client.streaming_annotate_video(iter(requests)) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert next(args[0]) == request + + # Establish that the response is the type that we expect. + for message in response: + assert isinstance(message, video_intelligence.StreamingAnnotateVideoResponse) + + +def test_streaming_annotate_video_from_dict(): + test_streaming_annotate_video(request_type=dict) + + +@pytest.mark.asyncio +async def test_streaming_annotate_video_async( + transport: str = "grpc_asyncio", + request_type=video_intelligence.StreamingAnnotateVideoRequest, +): + client = StreamingVideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. 
+ request = request_type() + + requests = [request] + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object( + type(client.transport.streaming_annotate_video), "__call__" + ) as call: + # Designate an appropriate return value for the call. + call.return_value = mock.Mock(aio.StreamStreamCall, autospec=True) + call.return_value.read = mock.AsyncMock( + side_effect=[video_intelligence.StreamingAnnotateVideoResponse()] + ) + + response = await client.streaming_annotate_video(iter(requests)) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert next(args[0]) == request + + # Establish that the response is the type that we expect. + message = await response.read() + assert isinstance(message, video_intelligence.StreamingAnnotateVideoResponse) + + +@pytest.mark.asyncio +async def test_streaming_annotate_video_async_from_dict(): + await test_streaming_annotate_video_async(request_type=dict) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. + transport = transports.StreamingVideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.StreamingVideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = StreamingVideoIntelligenceServiceClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide scopes and a transport instance. 
+ transport = transports.StreamingVideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = StreamingVideoIntelligenceServiceClient( + client_options={"scopes": ["1", "2"]}, transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.StreamingVideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + client = StreamingVideoIntelligenceServiceClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. + transport = transports.StreamingVideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.StreamingVideoIntelligenceServiceGrpcTransport, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. 
+ client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, transports.StreamingVideoIntelligenceServiceGrpcTransport, + ) + + +def test_streaming_video_intelligence_service_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(exceptions.DuplicateCredentialArgs): + transport = transports.StreamingVideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_streaming_video_intelligence_service_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service.transports.StreamingVideoIntelligenceServiceTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.StreamingVideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ("streaming_annotate_video",) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + +def test_streaming_video_intelligence_service_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + auth, "load_credentials_from_file" + ) as load_creds, mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service.transports.StreamingVideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.StreamingVideoIntelligenceServiceTransport( + credentials_file="credentials.json", quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_streaming_video_intelligence_service_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(auth, "default") as adc, mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.streaming_video_intelligence_service.transports.StreamingVideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.StreamingVideoIntelligenceServiceTransport() + adc.assert_called_once() + + +def test_streaming_video_intelligence_service_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + StreamingVideoIntelligenceServiceClient() + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id=None, + ) + + +def test_streaming_video_intelligence_service_transport_auth_adc(): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transports.StreamingVideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", quota_project_id="octopus" + ) + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_streaming_video_intelligence_service_host_no_port(): + client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_streaming_video_intelligence_service_host_with_port(): + client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com:8000" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:8000" + + +def test_streaming_video_intelligence_service_grpc_transport_channel(): + channel = grpc.insecure_channel("http://localhost/") + + # Check that channel is used if provided. 
+    transport = transports.StreamingVideoIntelligenceServiceGrpcTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+def test_streaming_video_intelligence_service_grpc_asyncio_transport_channel():
+    channel = aio.insecure_channel("http://localhost/")
+
+    # Check that channel is used if provided.
+    transport = transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport(
+        host="squid.clam.whelk", channel=channel,
+    )
+    assert transport.grpc_channel == channel
+    assert transport._host == "squid.clam.whelk:443"
+    assert transport._ssl_channel_credentials is None
+
+
+@pytest.mark.parametrize(
+    "transport_class",
+    [
+        transports.StreamingVideoIntelligenceServiceGrpcTransport,
+        transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport,
+    ],
+)
+def test_streaming_video_intelligence_service_transport_channel_mtls_with_client_cert_source(
+    transport_class,
+):
+    with mock.patch(
+        "grpc.ssl_channel_credentials", autospec=True
+    ) as grpc_ssl_channel_cred:
+        with mock.patch.object(
+            transport_class, "create_channel", autospec=True
+        ) as grpc_create_channel:
+            mock_ssl_cred = mock.Mock()
+            grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+            mock_grpc_channel = mock.Mock()
+            grpc_create_channel.return_value = mock_grpc_channel
+
+            cred = credentials.AnonymousCredentials()
+            with pytest.warns(DeprecationWarning):
+                with mock.patch.object(auth, "default") as adc:
+                    adc.return_value = (cred, None)
+                    transport = transport_class(
+                        host="squid.clam.whelk",
+                        api_mtls_endpoint="mtls.squid.clam.whelk",
+                        client_cert_source=client_cert_source_callback,
+                    )
+                    adc.assert_called_once()
+
+            grpc_ssl_channel_cred.assert_called_once_with(
+                certificate_chain=b"cert bytes", private_key=b"key bytes"
+            )
+            grpc_create_channel.assert_called_once_with(
+                "mtls.squid.clam.whelk:443",
+                credentials=cred,
+
credentials_file=None, + scopes=("https://www.googleapis.com/auth/cloud-platform",), + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + ) + assert transport.grpc_channel == mock_grpc_channel + assert transport._ssl_channel_credentials == mock_ssl_cred + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.StreamingVideoIntelligenceServiceGrpcTransport, + transports.StreamingVideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_streaming_video_intelligence_service_transport_channel_mtls_with_adc( + transport_class, +): + mock_ssl_cred = mock.Mock() + with mock.patch.multiple( + "google.auth.transport.grpc.SslCredentials", + __init__=mock.Mock(return_value=None), + ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred), + ): + with mock.patch.object( + transport_class, "create_channel", autospec=True + ) as grpc_create_channel: + mock_grpc_channel = mock.Mock() + grpc_create_channel.return_value = mock_grpc_channel + mock_cred = mock.Mock() + + with pytest.warns(DeprecationWarning): + transport = transport_class( + host="squid.clam.whelk", + credentials=mock_cred, + api_mtls_endpoint="mtls.squid.clam.whelk", + client_cert_source=None, + ) + + grpc_create_channel.assert_called_once_with( + "mtls.squid.clam.whelk:443", + credentials=mock_cred, + credentials_file=None, + scopes=("https://www.googleapis.com/auth/cloud-platform",), + ssl_credentials=mock_ssl_cred, + quota_project_id=None, + ) + assert transport.grpc_channel == mock_grpc_channel + + +def test_common_billing_account_path(): + billing_account = "squid" + + expected = "billingAccounts/{billing_account}".format( + billing_account=billing_account, + ) + actual = StreamingVideoIntelligenceServiceClient.common_billing_account_path( + billing_account + ) + assert expected == actual + + +def test_parse_common_billing_account_path(): + expected = { + "billing_account": "clam", + } + path = StreamingVideoIntelligenceServiceClient.common_billing_account_path( + **expected + ) + 
+ # Check that the path construction is reversible. + actual = StreamingVideoIntelligenceServiceClient.parse_common_billing_account_path( + path + ) + assert expected == actual + + +def test_common_folder_path(): + folder = "whelk" + + expected = "folders/{folder}".format(folder=folder,) + actual = StreamingVideoIntelligenceServiceClient.common_folder_path(folder) + assert expected == actual + + +def test_parse_common_folder_path(): + expected = { + "folder": "octopus", + } + path = StreamingVideoIntelligenceServiceClient.common_folder_path(**expected) + + # Check that the path construction is reversible. + actual = StreamingVideoIntelligenceServiceClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "oyster" + + expected = "organizations/{organization}".format(organization=organization,) + actual = StreamingVideoIntelligenceServiceClient.common_organization_path( + organization + ) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "nudibranch", + } + path = StreamingVideoIntelligenceServiceClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = StreamingVideoIntelligenceServiceClient.parse_common_organization_path( + path + ) + assert expected == actual + + +def test_common_project_path(): + project = "cuttlefish" + + expected = "projects/{project}".format(project=project,) + actual = StreamingVideoIntelligenceServiceClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "mussel", + } + path = StreamingVideoIntelligenceServiceClient.common_project_path(**expected) + + # Check that the path construction is reversible. 
+ actual = StreamingVideoIntelligenceServiceClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "winkle" + location = "nautilus" + + expected = "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + actual = StreamingVideoIntelligenceServiceClient.common_location_path( + project, location + ) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "scallop", + "location": "abalone", + } + path = StreamingVideoIntelligenceServiceClient.common_location_path(**expected) + + # Check that the path construction is reversible. + actual = StreamingVideoIntelligenceServiceClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_withDEFAULT_CLIENT_INFO(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.StreamingVideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + client = StreamingVideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.StreamingVideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = StreamingVideoIntelligenceServiceClient.get_transport_class() + transport = transport_class( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) diff --git a/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_video_intelligence_service.py b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_video_intelligence_service.py new file mode 100644 index 000000000000..617acdd27788 --- /dev/null +++ b/packages/google-cloud-videointelligence/tests/unit/gapic/videointelligence_v1p3beta1/test_video_intelligence_service.py @@ -0,0 
+1,1079 @@ +# -*- coding: utf-8 -*- + +# Copyright 2020 Google LLC +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# + +import os +import mock + +import grpc +from grpc.experimental import aio +import math +import pytest +from proto.marshal.rules.dates import DurationRule, TimestampRule + +from google import auth +from google.api_core import client_options +from google.api_core import exceptions +from google.api_core import future +from google.api_core import gapic_v1 +from google.api_core import grpc_helpers +from google.api_core import grpc_helpers_async +from google.api_core import operation_async # type: ignore +from google.api_core import operations_v1 +from google.auth import credentials +from google.auth.exceptions import MutualTLSChannelError +from google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service import ( + VideoIntelligenceServiceAsyncClient, +) +from google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service import ( + VideoIntelligenceServiceClient, +) +from google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service import ( + transports, +) +from google.cloud.videointelligence_v1p3beta1.types import video_intelligence +from google.longrunning import operations_pb2 +from google.oauth2 import service_account +from google.protobuf import duration_pb2 as duration # type: ignore + + +def client_cert_source_callback(): + return b"cert bytes", b"key bytes" + + +# If default endpoint is localhost, 
then default mtls endpoint will be the same. +# This method modifies the default endpoint so the client can produce a different +# mtls endpoint for endpoint testing purposes. +def modify_default_endpoint(client): + return ( + "foo.googleapis.com" + if ("localhost" in client.DEFAULT_ENDPOINT) + else client.DEFAULT_ENDPOINT + ) + + +def test__get_default_mtls_endpoint(): + api_endpoint = "example.googleapis.com" + api_mtls_endpoint = "example.mtls.googleapis.com" + sandbox_endpoint = "example.sandbox.googleapis.com" + sandbox_mtls_endpoint = "example.mtls.sandbox.googleapis.com" + non_googleapi = "api.example.com" + + assert VideoIntelligenceServiceClient._get_default_mtls_endpoint(None) is None + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(api_mtls_endpoint) + == api_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(sandbox_mtls_endpoint) + == sandbox_mtls_endpoint + ) + assert ( + VideoIntelligenceServiceClient._get_default_mtls_endpoint(non_googleapi) + == non_googleapi + ) + + +@pytest.mark.parametrize( + "client_class", + [VideoIntelligenceServiceClient, VideoIntelligenceServiceAsyncClient], +) +def test_video_intelligence_service_client_from_service_account_file(client_class): + creds = credentials.AnonymousCredentials() + with mock.patch.object( + service_account.Credentials, "from_service_account_file" + ) as factory: + factory.return_value = creds + client = client_class.from_service_account_file("dummy/file/path.json") + assert client.transport._credentials == creds + + client = client_class.from_service_account_json("dummy/file/path.json") + assert client.transport._credentials == creds + + assert client.transport._host == "videointelligence.googleapis.com:443" + + 
+def test_video_intelligence_service_client_get_transport_class(): + transport = VideoIntelligenceServiceClient.get_transport_class() + assert transport == transports.VideoIntelligenceServiceGrpcTransport + + transport = VideoIntelligenceServiceClient.get_transport_class("grpc") + assert transport == transports.VideoIntelligenceServiceGrpcTransport + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +def test_video_intelligence_service_client_client_options( + client_class, transport_class, transport_name +): + # Check that if channel is provided we won't create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + transport = transport_class(credentials=credentials.AnonymousCredentials()) + client = client_class(transport=transport) + gtc.assert_not_called() + + # Check that if channel is provided via str we will create a new one. + with mock.patch.object( + VideoIntelligenceServiceClient, "get_transport_class" + ) as gtc: + client = client_class(transport=transport_name) + gtc.assert_called() + + # Check the case api_endpoint is provided. 
+ options = client_options.ClientOptions(api_endpoint="squid.clam.whelk") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "never". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "never"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT is + # "always". + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "always"}): + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_MTLS_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case api_endpoint is not provided and GOOGLE_API_USE_MTLS_ENDPOINT has + # unsupported value. + with mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "Unsupported"}): + with pytest.raises(MutualTLSChannelError): + client = client_class() + + # Check the case GOOGLE_API_USE_CLIENT_CERTIFICATE has unsupported value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": "Unsupported"} + ): + with pytest.raises(ValueError): + client = client_class() + + # Check the case quota_project_id is provided + options = client_options.ClientOptions(quota_project_id="octopus") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id="octopus", + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name,use_client_cert_env", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "true", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "true", + ), + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + "false", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + "false", + ), + ], +) +@mock.patch.object( + VideoIntelligenceServiceClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceClient), +) +@mock.patch.object( + VideoIntelligenceServiceAsyncClient, + "DEFAULT_ENDPOINT", + modify_default_endpoint(VideoIntelligenceServiceAsyncClient), +) +@mock.patch.dict(os.environ, {"GOOGLE_API_USE_MTLS_ENDPOINT": "auto"}) +def test_video_intelligence_service_client_mtls_env_auto( + client_class, transport_class, transport_name, use_client_cert_env +): + # This tests the endpoint autoswitch behavior. Endpoint is autoswitched to the default + # mtls endpoint, if GOOGLE_API_USE_CLIENT_CERTIFICATE is "true" and client cert exists. 
+ + # Check the case client_cert_source is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. + with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + options = client_options.ClientOptions( + client_cert_source=client_cert_source_callback + ) + with mock.patch.object(transport_class, "__init__") as patched: + ssl_channel_creds = mock.Mock() + with mock.patch( + "grpc.ssl_channel_credentials", return_value=ssl_channel_creds + ): + patched.return_value = None + client = client_class(client_options=options) + + if use_client_cert_env == "false": + expected_ssl_channel_creds = None + expected_host = client.DEFAULT_ENDPOINT + else: + expected_ssl_channel_creds = ssl_channel_creds + expected_host = client.DEFAULT_MTLS_ENDPOINT + + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case ADC client cert is provided. Whether client cert is used depends on + # GOOGLE_API_USE_CLIENT_CERTIFICATE value. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.ssl_credentials", + new_callable=mock.PropertyMock, + ) as ssl_credentials_mock: + if use_client_cert_env == "false": + is_mtls_mock.return_value = False + ssl_credentials_mock.return_value = None + expected_host = client.DEFAULT_ENDPOINT + expected_ssl_channel_creds = None + else: + is_mtls_mock.return_value = True + ssl_credentials_mock.return_value = mock.Mock() + expected_host = client.DEFAULT_MTLS_ENDPOINT + expected_ssl_channel_creds = ( + ssl_credentials_mock.return_value + ) + + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=expected_host, + scopes=None, + ssl_channel_credentials=expected_ssl_channel_creds, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + # Check the case client_cert_source and ADC client cert are not provided. 
+ with mock.patch.dict( + os.environ, {"GOOGLE_API_USE_CLIENT_CERTIFICATE": use_client_cert_env} + ): + with mock.patch.object(transport_class, "__init__") as patched: + with mock.patch( + "google.auth.transport.grpc.SslCredentials.__init__", return_value=None + ): + with mock.patch( + "google.auth.transport.grpc.SslCredentials.is_mtls", + new_callable=mock.PropertyMock, + ) as is_mtls_mock: + is_mtls_mock.return_value = False + patched.return_value = None + client = client_class() + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_scopes( + client_class, transport_class, transport_name +): + # Check the case scopes are provided. 
+ options = client_options.ClientOptions(scopes=["1", "2"],) + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file=None, + host=client.DEFAULT_ENDPOINT, + scopes=["1", "2"], + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +@pytest.mark.parametrize( + "client_class,transport_class,transport_name", + [ + ( + VideoIntelligenceServiceClient, + transports.VideoIntelligenceServiceGrpcTransport, + "grpc", + ), + ( + VideoIntelligenceServiceAsyncClient, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + "grpc_asyncio", + ), + ], +) +def test_video_intelligence_service_client_client_options_credentials_file( + client_class, transport_class, transport_name +): + # Check the case credentials file is provided. + options = client_options.ClientOptions(credentials_file="credentials.json") + with mock.patch.object(transport_class, "__init__") as patched: + patched.return_value = None + client = client_class(client_options=options) + patched.assert_called_once_with( + credentials=None, + credentials_file="credentials.json", + host=client.DEFAULT_ENDPOINT, + scopes=None, + ssl_channel_credentials=None, + quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_video_intelligence_service_client_client_options_from_dict(): + with mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceGrpcTransport.__init__" + ) as grpc_transport: + grpc_transport.return_value = None + client = VideoIntelligenceServiceClient( + client_options={"api_endpoint": "squid.clam.whelk"} + ) + grpc_transport.assert_called_once_with( + credentials=None, + credentials_file=None, + host="squid.clam.whelk", + scopes=None, + ssl_channel_credentials=None, + 
quota_project_id=None, + client_info=transports.base.DEFAULT_CLIENT_INFO, + ) + + +def test_annotate_video( + transport: str = "grpc", request_type=video_intelligence.AnnotateVideoRequest +): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/spam") + + response = client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +def test_annotate_video_from_dict(): + test_annotate_video(request_type=dict) + + +@pytest.mark.asyncio +async def test_annotate_video_async( + transport: str = "grpc_asyncio", + request_type=video_intelligence.AnnotateVideoRequest, +): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # Everything is optional in proto3 as far as the runtime is concerned, + # and we are mocking out the actual API, so just send an empty request. + request = request_type() + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. 
+ call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + + response = await client.annotate_video(request) + + # Establish that the underlying gRPC stub method was called. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0] == video_intelligence.AnnotateVideoRequest() + + # Establish that the response is the type that we expect. + assert isinstance(response, future.Future) + + +@pytest.mark.asyncio +async def test_annotate_video_async_from_dict(): + await test_annotate_video_async(request_type=dict) + + +def test_annotate_video_flattened(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) == 1 + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +def test_annotate_video_flattened_error(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. 
+ with pytest.raises(ValueError): + client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Mock the actual call within the gRPC stub, and fake the request. + with mock.patch.object(type(client.transport.annotate_video), "__call__") as call: + # Designate an appropriate return value for the call. + call.return_value = operations_pb2.Operation(name="operations/op") + + call.return_value = grpc_helpers_async.FakeUnaryUnaryCall( + operations_pb2.Operation(name="operations/spam") + ) + # Call the method with a truthy value for each flattened field, + # using the keyword arguments to the method. + response = await client.annotate_video( + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + # Establish that the underlying call was made with the expected + # request object values. + assert len(call.mock_calls) + _, args, _ = call.mock_calls[0] + + assert args[0].input_uri == "input_uri_value" + + assert args[0].features == [video_intelligence.Feature.LABEL_DETECTION] + + +@pytest.mark.asyncio +async def test_annotate_video_flattened_error_async(): + client = VideoIntelligenceServiceAsyncClient( + credentials=credentials.AnonymousCredentials(), + ) + + # Attempting to call a method with both a request object and flattened + # fields is an error. + with pytest.raises(ValueError): + await client.annotate_video( + video_intelligence.AnnotateVideoRequest(), + input_uri="input_uri_value", + features=[video_intelligence.Feature.LABEL_DETECTION], + ) + + +def test_credentials_transport_error(): + # It is an error to provide credentials and a transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), transport=transport, + ) + + # It is an error to provide a credentials file and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"credentials_file": "credentials.json"}, + transport=transport, + ) + + # It is an error to provide scopes and a transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + with pytest.raises(ValueError): + client = VideoIntelligenceServiceClient( + client_options={"scopes": ["1", "2"]}, transport=transport, + ) + + +def test_transport_instance(): + # A client may be instantiated with a custom transport instance. + transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + client = VideoIntelligenceServiceClient(transport=transport) + assert client.transport is transport + + +def test_transport_get_channel(): + # A client may be instantiated with a custom transport instance. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport( + credentials=credentials.AnonymousCredentials(), + ) + channel = transport.grpc_channel + assert channel + + +@pytest.mark.parametrize( + "transport_class", + [ + transports.VideoIntelligenceServiceGrpcTransport, + transports.VideoIntelligenceServiceGrpcAsyncIOTransport, + ], +) +def test_transport_adc(transport_class): + # Test default credentials are used if not provided. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transport_class() + adc.assert_called_once() + + +def test_transport_grpc_default(): + # A client should use the gRPC transport by default. + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + ) + assert isinstance( + client.transport, transports.VideoIntelligenceServiceGrpcTransport, + ) + + +def test_video_intelligence_service_base_transport_error(): + # Passing both a credentials object and credentials_file should raise an error + with pytest.raises(exceptions.DuplicateCredentialArgs): + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + credentials_file="credentials.json", + ) + + +def test_video_intelligence_service_base_transport(): + # Instantiate the base transport. + with mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport.__init__" + ) as Transport: + Transport.return_value = None + transport = transports.VideoIntelligenceServiceTransport( + credentials=credentials.AnonymousCredentials(), + ) + + # Every method on the transport should just blindly + # raise NotImplementedError. 
+ methods = ("annotate_video",) + for method in methods: + with pytest.raises(NotImplementedError): + getattr(transport, method)(request=object()) + + # Additionally, the LRO client (a property) should + # also raise NotImplementedError + with pytest.raises(NotImplementedError): + transport.operations_client + + +def test_video_intelligence_service_base_transport_with_credentials_file(): + # Instantiate the base transport with a credentials file + with mock.patch.object( + auth, "load_credentials_from_file" + ) as load_creds, mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + load_creds.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport( + credentials_file="credentials.json", quota_project_id="octopus", + ) + load_creds.assert_called_once_with( + "credentials.json", + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_base_transport_with_adc(): + # Test the default credentials are used if credentials and credentials_file are None. + with mock.patch.object(auth, "default") as adc, mock.patch( + "google.cloud.videointelligence_v1p3beta1.services.video_intelligence_service.transports.VideoIntelligenceServiceTransport._prep_wrapped_messages" + ) as Transport: + Transport.return_value = None + adc.return_value = (credentials.AnonymousCredentials(), None) + transport = transports.VideoIntelligenceServiceTransport() + adc.assert_called_once() + + +def test_video_intelligence_service_auth_adc(): + # If no credentials are provided, we should use ADC credentials. 
+ with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + VideoIntelligenceServiceClient() + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id=None, + ) + + +def test_video_intelligence_service_transport_auth_adc(): + # If credentials and host are not provided, the transport class should use + # ADC credentials. + with mock.patch.object(auth, "default") as adc: + adc.return_value = (credentials.AnonymousCredentials(), None) + transports.VideoIntelligenceServiceGrpcTransport( + host="squid.clam.whelk", quota_project_id="octopus" + ) + adc.assert_called_once_with( + scopes=("https://www.googleapis.com/auth/cloud-platform",), + quota_project_id="octopus", + ) + + +def test_video_intelligence_service_host_no_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:443" + + +def test_video_intelligence_service_host_with_port(): + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), + client_options=client_options.ClientOptions( + api_endpoint="videointelligence.googleapis.com:8000" + ), + ) + assert client.transport._host == "videointelligence.googleapis.com:8000" + + +def test_video_intelligence_service_grpc_transport_channel(): + channel = grpc.insecure_channel("http://localhost/") + + # Check that channel is used if provided. 
+ transport = transports.VideoIntelligenceServiceGrpcTransport(
+ host="squid.clam.whelk", channel=channel,
+ )
+ assert transport.grpc_channel == channel
+ assert transport._host == "squid.clam.whelk:443"
+ assert transport._ssl_channel_credentials is None
+
+
+def test_video_intelligence_service_grpc_asyncio_transport_channel():
+ channel = aio.insecure_channel("http://localhost/")
+
+ # Check that channel is used if provided.
+ transport = transports.VideoIntelligenceServiceGrpcAsyncIOTransport(
+ host="squid.clam.whelk", channel=channel,
+ )
+ assert transport.grpc_channel == channel
+ assert transport._host == "squid.clam.whelk:443"
+ assert transport._ssl_channel_credentials is None
+
+
+@pytest.mark.parametrize(
+ "transport_class",
+ [
+ transports.VideoIntelligenceServiceGrpcTransport,
+ transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+ ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_client_cert_source(
+ transport_class,
+):
+ with mock.patch(
+ "grpc.ssl_channel_credentials", autospec=True
+ ) as grpc_ssl_channel_cred:
+ with mock.patch.object(
+ transport_class, "create_channel", autospec=True
+ ) as grpc_create_channel:
+ mock_ssl_cred = mock.Mock()
+ grpc_ssl_channel_cred.return_value = mock_ssl_cred
+
+ mock_grpc_channel = mock.Mock()
+ grpc_create_channel.return_value = mock_grpc_channel
+
+ cred = credentials.AnonymousCredentials()
+ with pytest.warns(DeprecationWarning):
+ with mock.patch.object(auth, "default") as adc:
+ adc.return_value = (cred, None)
+ transport = transport_class(
+ host="squid.clam.whelk",
+ api_mtls_endpoint="mtls.squid.clam.whelk",
+ client_cert_source=client_cert_source_callback,
+ )
+ adc.assert_called_once()
+
+ grpc_ssl_channel_cred.assert_called_once_with(
+ certificate_chain=b"cert bytes", private_key=b"key bytes"
+ )
+ grpc_create_channel.assert_called_once_with(
+ "mtls.squid.clam.whelk:443",
+ credentials=cred,
+ credentials_file=None,
+
+ scopes=("https://www.googleapis.com/auth/cloud-platform",),
+ ssl_credentials=mock_ssl_cred,
+ quota_project_id=None,
+ )
+ assert transport.grpc_channel == mock_grpc_channel
+ assert transport._ssl_channel_credentials == mock_ssl_cred
+
+
+@pytest.mark.parametrize(
+ "transport_class",
+ [
+ transports.VideoIntelligenceServiceGrpcTransport,
+ transports.VideoIntelligenceServiceGrpcAsyncIOTransport,
+ ],
+)
+def test_video_intelligence_service_transport_channel_mtls_with_adc(transport_class):
+ mock_ssl_cred = mock.Mock()
+ with mock.patch.multiple(
+ "google.auth.transport.grpc.SslCredentials",
+ __init__=mock.Mock(return_value=None),
+ ssl_credentials=mock.PropertyMock(return_value=mock_ssl_cred),
+ ):
+ with mock.patch.object(
+ transport_class, "create_channel", autospec=True
+ ) as grpc_create_channel:
+ mock_grpc_channel = mock.Mock()
+ grpc_create_channel.return_value = mock_grpc_channel
+ mock_cred = mock.Mock()
+
+ with pytest.warns(DeprecationWarning):
+ transport = transport_class(
+ host="squid.clam.whelk",
+ credentials=mock_cred,
+ api_mtls_endpoint="mtls.squid.clam.whelk",
+ client_cert_source=None,
+ )
+
+ grpc_create_channel.assert_called_once_with(
+ "mtls.squid.clam.whelk:443",
+ credentials=mock_cred,
+ credentials_file=None,
+ scopes=("https://www.googleapis.com/auth/cloud-platform",),
+ ssl_credentials=mock_ssl_cred,
+ quota_project_id=None,
+ )
+ assert transport.grpc_channel == mock_grpc_channel
+
+
+def test_video_intelligence_service_grpc_lro_client():
+ client = VideoIntelligenceServiceClient(
+ credentials=credentials.AnonymousCredentials(), transport="grpc",
+ )
+ transport = client.transport
+
+ # Ensure that we have an api-core operations client.
+ assert isinstance(transport.operations_client, operations_v1.OperationsClient,)
+
+ # Ensure that subsequent calls to the property send the exact same object.
+ assert transport.operations_client is transport.operations_client
+
+
+def test_video_intelligence_service_grpc_lro_async_client():
+ client = VideoIntelligenceServiceAsyncClient(
+ credentials=credentials.AnonymousCredentials(), transport="grpc_asyncio",
+ )
+ transport = client.transport
+
+ # Ensure that we have an api-core operations client.
+ assert isinstance(transport.operations_client, operations_v1.OperationsAsyncClient,)
+
+ # Ensure that subsequent calls to the property send the exact same object.
+ assert transport.operations_client is transport.operations_client
+
+
+def test_common_billing_account_path():
+ billing_account = "squid"
+
+ expected = "billingAccounts/{billing_account}".format(
+ billing_account=billing_account,
+ )
+ actual = VideoIntelligenceServiceClient.common_billing_account_path(billing_account)
+ assert expected == actual
+
+
+def test_parse_common_billing_account_path():
+ expected = {
+ "billing_account": "clam",
+ }
+ path = VideoIntelligenceServiceClient.common_billing_account_path(**expected)
+
+ # Check that the path construction is reversible.
+ actual = VideoIntelligenceServiceClient.parse_common_billing_account_path(path)
+ assert expected == actual
+
+
+def test_common_folder_path():
+ folder = "whelk"
+
+ expected = "folders/{folder}".format(folder=folder,)
+ actual = VideoIntelligenceServiceClient.common_folder_path(folder)
+ assert expected == actual
+
+
+def test_parse_common_folder_path():
+ expected = {
+ "folder": "octopus",
+ }
+ path = VideoIntelligenceServiceClient.common_folder_path(**expected)
+
+ # Check that the path construction is reversible.
+ actual = VideoIntelligenceServiceClient.parse_common_folder_path(path) + assert expected == actual + + +def test_common_organization_path(): + organization = "oyster" + + expected = "organizations/{organization}".format(organization=organization,) + actual = VideoIntelligenceServiceClient.common_organization_path(organization) + assert expected == actual + + +def test_parse_common_organization_path(): + expected = { + "organization": "nudibranch", + } + path = VideoIntelligenceServiceClient.common_organization_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_organization_path(path) + assert expected == actual + + +def test_common_project_path(): + project = "cuttlefish" + + expected = "projects/{project}".format(project=project,) + actual = VideoIntelligenceServiceClient.common_project_path(project) + assert expected == actual + + +def test_parse_common_project_path(): + expected = { + "project": "mussel", + } + path = VideoIntelligenceServiceClient.common_project_path(**expected) + + # Check that the path construction is reversible. + actual = VideoIntelligenceServiceClient.parse_common_project_path(path) + assert expected == actual + + +def test_common_location_path(): + project = "winkle" + location = "nautilus" + + expected = "projects/{project}/locations/{location}".format( + project=project, location=location, + ) + actual = VideoIntelligenceServiceClient.common_location_path(project, location) + assert expected == actual + + +def test_parse_common_location_path(): + expected = { + "project": "scallop", + "location": "abalone", + } + path = VideoIntelligenceServiceClient.common_location_path(**expected) + + # Check that the path construction is reversible. 
+ actual = VideoIntelligenceServiceClient.parse_common_location_path(path) + assert expected == actual + + +def test_client_withDEFAULT_CLIENT_INFO(): + client_info = gapic_v1.client_info.ClientInfo() + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + client = VideoIntelligenceServiceClient( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info) + + with mock.patch.object( + transports.VideoIntelligenceServiceTransport, "_prep_wrapped_messages" + ) as prep: + transport_class = VideoIntelligenceServiceClient.get_transport_class() + transport = transport_class( + credentials=credentials.AnonymousCredentials(), client_info=client_info, + ) + prep.assert_called_once_with(client_info)